Jan 30 21:40:01 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 21:40:01 crc restorecon[4752]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:01 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:40:02 crc restorecon[4752]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc 
restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:02 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:40:03 crc restorecon[4752]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 21:40:03 crc restorecon[4752]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 21:40:04 crc kubenswrapper[4979]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 21:40:04 crc kubenswrapper[4979]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 21:40:04 crc kubenswrapper[4979]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 21:40:04 crc kubenswrapper[4979]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
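[Note] The restorecon pass above skips each of these paths because their current SELinux type, container_file_t, is treated as an admin customization: kubelet labels pod volumes and container directories itself (including per-pod MCS category pairs such as s0:c7,c13), and a default relabel without the force flag leaves types the policy marks as customizable in place. When auditing a boot like this one, it can help to reduce the stream to a per-context tally. The sketch below is a hypothetical helper, not part of the log or of any shipped tooling, and the journalctl invocation in its docstring is just one assumed way to feed it.

```python
#!/usr/bin/env python3
"""Hypothetical helper (an illustration, not shipped tooling): tally
restorecon "not reset as customized by admin" entries by target SELinux
context. One assumed way to feed it:

    journalctl -b -t restorecon | python3 tally_not_reset.py
"""
import re
import sys
from collections import Counter

# Matches entries shaped like the ones above, e.g.
#   restorecon[4752]: /var/lib/kubelet/... not reset as customized by
#   admin to system_u:object_r:container_file_t:s0:c7,c13
NOT_RESET = re.compile(
    r"restorecon\[\d+\]: (?P<path>/\S+) not reset as customized by admin"
    r" to (?P<context>\S+)"
)

def main() -> None:
    # finditer over the whole blob copes with entries that run together
    # on one physical line, as in the excerpt above.
    contexts = Counter(
        m.group("context") for m in NOT_RESET.finditer(sys.stdin.read())
    )
    print(f"{sum(contexts.values())} paths left with their current label")
    for context, count in contexts.most_common():
        print(f"{count:6d}  {context}")

if __name__ == "__main__":
    main()
```

Run against this excerpt, the dominant bucket would be container_file_t:s0:c7,c13 (the two catalog pods), with the remaining counts spread across the per-pod category pairs such as s0:c247,c522 and s0:c682,c947.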
Jan 30 21:40:04 crc kubenswrapper[4979]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 21:40:04 crc kubenswrapper[4979]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.730285 4979 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734054 4979 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734079 4979 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734086 4979 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734092 4979 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734098 4979 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734103 4979 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734110 4979 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734115 4979 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734120 4979 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734126 4979 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734133 4979 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734138 4979 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734156 4979 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734163 4979 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734172 4979 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
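[Note] The "Flag --X has been deprecated" warnings above are the kubelet pointing at its own config-file mechanism: each of those command-line flags either has a KubeletConfiguration counterpart or, for --pod-infra-container-image, is superseded by sandbox-image information from the CRI runtime, exactly as the warnings state. The "unrecognized feature gate" warnings that follow name gates defined at the OpenShift level rather than in upstream Kubernetes; this kubelet build merely logs and skips them. A hypothetical cross-check script follows, assuming kubelet.config.k8s.io/v1beta1 field names that should be verified against the kubelet documentation for the release in use.

```python
#!/usr/bin/env python3
"""Hypothetical cross-check (an illustration, not shipped tooling):
list the deprecated kubelet flags warned about above next to the
KubeletConfiguration (kubelet.config.k8s.io/v1beta1) fields assumed to
replace them. Feed it the journal text on stdin."""
import re
import sys

# Assumed replacements; the eviction and sandbox-image notes restate
# what the kubelet's own warnings above say.
REPLACEMENTS = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    "--register-with-taints": "registerWithTaints",
    "--system-reserved": "systemReserved",
    "--minimum-container-ttl-duration": "evictionHard / evictionSoft",
    "--pod-infra-container-image":
        "(none: the image GC gets the sandbox image from the CRI runtime)",
}

DEPRECATED = re.compile(r"Flag (--[\w-]+) has been deprecated")

for flag in sorted(set(DEPRECATED.findall(sys.stdin.read()))):
    print(f"{flag}  ->  {REPLACEMENTS.get(flag, 'see kubelet --help')}")
```

Fed the excerpt above on stdin, it would print one suggestion per deprecated flag, which is usually enough to decide what to fold into the file named by the kubelet's --config flag.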
Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734179 4979 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734188 4979 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734194 4979 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734200 4979 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734205 4979 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734210 4979 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734216 4979 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734221 4979 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734226 4979 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734230 4979 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734235 4979 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734240 4979 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734246 4979 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734250 4979 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734254 4979 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734258 4979 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734262 4979 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734267 4979 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734271 4979 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734276 4979 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734280 4979 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734286 4979 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734291 4979 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734296 4979 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734301 4979 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734305 4979 feature_gate.go:330] unrecognized feature gate: 
MetricsCollectionProfiles Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734311 4979 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734316 4979 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734323 4979 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734329 4979 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734334 4979 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734339 4979 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734344 4979 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734349 4979 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734354 4979 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734358 4979 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734364 4979 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734368 4979 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734374 4979 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
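After the journald prefix, every kubenswrapper line uses the klog header layout Lmmdd hh:mm:ss.uuuuuu pid file:line] msg, where L is the severity (I=info, W=warning, E=error, F=fatal). A small sketch for pulling those fields apart; the regular expression is derived from the lines above, so treat it as an approximation rather than klog's formal grammar:

package main

import (
	"fmt"
	"regexp"
)

// Matches headers like "W0130 21:40:04.734054 4979 feature_gate.go:330] msg".
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := "W0130 21:40:04.734054 4979 feature_gate.go:330] unrecognized feature gate: DNSNameResolver"
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date(mmdd)=%s time=%s pid=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}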
Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734379 4979 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734384 4979 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734389 4979 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734393 4979 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734398 4979 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734403 4979 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734407 4979 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734411 4979 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734417 4979 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734421 4979 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734425 4979 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734430 4979 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734434 4979 feature_gate.go:330] unrecognized feature gate: Example Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734439 4979 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734444 4979 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734448 4979 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.734452 4979 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734575 4979 flags.go:64] FLAG: --address="0.0.0.0" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734588 4979 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734598 4979 flags.go:64] FLAG: --anonymous-auth="true" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734605 4979 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734612 4979 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734620 4979 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734628 4979 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734637 4979 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734643 4979 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734650 4979 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734657 4979 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734664 4979 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734673 4979 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734679 4979 flags.go:64] FLAG: --cgroup-root="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734684 4979 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734689 4979 flags.go:64] FLAG: --client-ca-file="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734694 4979 flags.go:64] FLAG: --cloud-config="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734699 4979 flags.go:64] FLAG: --cloud-provider="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734705 4979 flags.go:64] FLAG: --cluster-dns="[]" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734713 4979 flags.go:64] FLAG: --cluster-domain="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734718 4979 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734723 4979 flags.go:64] FLAG: --config-dir="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734729 4979 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734735 4979 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734743 4979 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734748 4979 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734754 4979 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734759 4979 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734765 4979 flags.go:64] FLAG: --contention-profiling="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734770 4979 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734775 4979 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734781 4979 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734786 4979 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734793 4979 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734800 4979 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734805 4979 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734811 4979 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734817 4979 flags.go:64] FLAG: --enable-server="true" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734822 4979 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734830 4979 
flags.go:64] FLAG: --event-burst="100" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734836 4979 flags.go:64] FLAG: --event-qps="50" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734842 4979 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734847 4979 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734853 4979 flags.go:64] FLAG: --eviction-hard="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734860 4979 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734866 4979 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734871 4979 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734876 4979 flags.go:64] FLAG: --eviction-soft="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734882 4979 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734887 4979 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734893 4979 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734898 4979 flags.go:64] FLAG: --experimental-mounter-path="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734903 4979 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734908 4979 flags.go:64] FLAG: --fail-swap-on="true" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734913 4979 flags.go:64] FLAG: --feature-gates="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734920 4979 flags.go:64] FLAG: --file-check-frequency="20s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734925 4979 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734931 4979 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734936 4979 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734941 4979 flags.go:64] FLAG: --healthz-port="10248" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734947 4979 flags.go:64] FLAG: --help="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734952 4979 flags.go:64] FLAG: --hostname-override="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734957 4979 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734962 4979 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734967 4979 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734972 4979 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734977 4979 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734982 4979 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734987 4979 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734992 4979 flags.go:64] FLAG: 
--kernel-memcg-notification="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.734997 4979 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735002 4979 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735009 4979 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735016 4979 flags.go:64] FLAG: --kube-reserved="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735023 4979 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735048 4979 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735055 4979 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735060 4979 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735065 4979 flags.go:64] FLAG: --lock-file="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735070 4979 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735075 4979 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735080 4979 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735088 4979 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735094 4979 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735099 4979 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735104 4979 flags.go:64] FLAG: --logging-format="text" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735109 4979 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735115 4979 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735119 4979 flags.go:64] FLAG: --manifest-url="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735124 4979 flags.go:64] FLAG: --manifest-url-header="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735131 4979 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735136 4979 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735143 4979 flags.go:64] FLAG: --max-pods="110" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735148 4979 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735153 4979 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735159 4979 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735163 4979 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735169 4979 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735175 4979 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735180 4979 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735193 4979 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735198 4979 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735205 4979 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735212 4979 flags.go:64] FLAG: --pod-cidr="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735221 4979 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735230 4979 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735236 4979 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735242 4979 flags.go:64] FLAG: --pods-per-core="0" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735247 4979 flags.go:64] FLAG: --port="10250" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735265 4979 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735272 4979 flags.go:64] FLAG: --provider-id="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735277 4979 flags.go:64] FLAG: --qos-reserved="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735283 4979 flags.go:64] FLAG: --read-only-port="10255" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735289 4979 flags.go:64] FLAG: --register-node="true" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735294 4979 flags.go:64] FLAG: --register-schedulable="true" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735299 4979 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735308 4979 flags.go:64] FLAG: --registry-burst="10" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735313 4979 flags.go:64] FLAG: --registry-qps="5" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735318 4979 flags.go:64] FLAG: --reserved-cpus="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735323 4979 flags.go:64] FLAG: --reserved-memory="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735329 4979 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735334 4979 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735340 4979 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735345 4979 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735350 4979 flags.go:64] FLAG: --runonce="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735355 4979 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735360 4979 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735366 4979 flags.go:64] FLAG: --seccomp-default="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735372 4979 flags.go:64] FLAG: --serialize-image-pulls="true" 
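The flags.go:64 FLAG: listing running through this stretch (it ends just below) is the kubelet recording the effective value of every registered flag at startup, which is handy when reconstructing exactly how the process was invoked. The same kind of dump can be produced with pflag's VisitAll; a minimal sketch with two stand-in flags:

package main

import (
	"fmt"

	flag "github.com/spf13/pflag"
)

func main() {
	flag.String("node-ip", "192.168.126.11", "node IP")
	flag.Int("max-pods", 110, "maximum pods")
	flag.Parse()

	// Print every registered flag in the same FLAG: --name="value" shape
	// that appears in the log above.
	flag.VisitAll(func(f *flag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}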
Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735377 4979 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735382 4979 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735388 4979 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735401 4979 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735406 4979 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735411 4979 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735417 4979 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735422 4979 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735428 4979 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735434 4979 flags.go:64] FLAG: --system-cgroups="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735439 4979 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735447 4979 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735452 4979 flags.go:64] FLAG: --tls-cert-file="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735457 4979 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735465 4979 flags.go:64] FLAG: --tls-min-version="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735470 4979 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735476 4979 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735481 4979 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735487 4979 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735493 4979 flags.go:64] FLAG: --v="2" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735500 4979 flags.go:64] FLAG: --version="false" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735507 4979 flags.go:64] FLAG: --vmodule="" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735513 4979 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.735519 4979 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735644 4979 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735652 4979 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735657 4979 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735663 4979 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735669 4979 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 
21:40:04.735674 4979 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735679 4979 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735683 4979 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735688 4979 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735693 4979 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735698 4979 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735706 4979 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735711 4979 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735717 4979 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735723 4979 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735727 4979 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735734 4979 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735740 4979 feature_gate.go:330] unrecognized feature gate: Example Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735746 4979 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735752 4979 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735758 4979 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735762 4979 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735767 4979 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735772 4979 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735777 4979 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735781 4979 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735786 4979 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735791 4979 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735797 4979 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735801 4979 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735805 4979 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735810 4979 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735814 4979 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735818 4979 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735824 4979 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735828 4979 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735832 4979 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735837 4979 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735841 4979 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735847 4979 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
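The flag dump above records --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" and --enforce-node-allocatable="[pods]": the node's allocatable resources are roughly capacity minus reservations (eviction thresholds left aside here). A sketch of that subtraction using resource.Quantity, assuming the k8s.io/apimachinery module is available; the capacity numbers are taken from the Machine record at the end of this boot:

package main

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	capacity := map[string]resource.Quantity{
		"cpu":    resource.MustParse("12"),          // NumCores:12 below
		"memory": resource.MustParse("33654128640"), // MemoryCapacity in bytes
	}

	// Parse the --system-reserved value recorded in the flag dump above.
	reserved := map[string]resource.Quantity{}
	for _, kv := range strings.Split("cpu=200m,ephemeral-storage=350Mi,memory=350Mi", ",") {
		name, val, _ := strings.Cut(kv, "=")
		reserved[name] = resource.MustParse(val)
	}

	// Allocatable = capacity - reserved (eviction thresholds omitted).
	for name, c := range capacity {
		r := reserved[name]
		c.Sub(r)
		fmt.Printf("allocatable %s: %s\n", name, c.String())
	}
}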
Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735852 4979 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735856 4979 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735861 4979 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735869 4979 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735873 4979 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735877 4979 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735882 4979 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735886 4979 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735892 4979 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735896 4979 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735901 4979 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735905 4979 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735909 4979 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735914 4979 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735918 4979 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735923 4979 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735928 4979 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735934 4979 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735939 4979 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735943 4979 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735948 4979 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735953 4979 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735959 4979 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735963 4979 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735969 4979 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735974 4979 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735980 4979 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.735985 4979 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.736010 4979 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.736015 4979 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.736020 4979 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.736044 4979 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.759985 4979 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.760106 4979 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760204 4979 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760218 4979 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760226 4979 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760233 4979 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760239 4979 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760245 4979 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760251 4979 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760256 4979 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760263 4979 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760271 4979 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760277 4979 feature_gate.go:330] unrecognized feature gate: Example Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760282 4979 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760287 4979 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760293 4979 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760298 4979 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760303 4979 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760308 4979 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760313 4979 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760318 4979 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760323 4979 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760329 4979 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760334 4979 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760339 4979 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760344 4979 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760350 4979 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760357 4979 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760362 4979 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760369 4979 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760374 4979 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760379 4979 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760385 4979 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760390 4979 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760395 4979 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760401 4979 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760406 4979 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760411 4979 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760417 4979 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760423 4979 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760428 4979 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760433 4979 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760438 4979 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760443 4979 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760448 4979 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760453 4979 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760457 4979 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760462 4979 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760467 4979 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760471 4979 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760477 4979 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760481 4979 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 
21:40:04.760486 4979 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760492 4979 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760497 4979 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760504 4979 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760509 4979 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760514 4979 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760519 4979 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760524 4979 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760528 4979 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760533 4979 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760539 4979 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760544 4979 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760548 4979 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760553 4979 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760558 4979 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760564 4979 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760569 4979 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760574 4979 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760579 4979 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760584 4979 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760589 4979 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.760599 4979 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760779 4979 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760789 
4979 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760795 4979 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760801 4979 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760806 4979 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760812 4979 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760817 4979 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760823 4979 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760828 4979 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760833 4979 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760838 4979 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760843 4979 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760848 4979 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760854 4979 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760860 4979 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760865 4979 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760871 4979 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760876 4979 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760881 4979 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760886 4979 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760891 4979 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760897 4979 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760902 4979 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760907 4979 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760913 4979 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
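A little further down, the kubelet validates the CRI v1 runtime and image APIs over the --container-runtime-endpoint socket and adopts cgroupDriver="systemd" from the runtime itself, superseding the unused --cgroup-driver="cgroupfs" default shown in the flag dump. A sketch of querying the same endpoint's Version RPC directly, assuming the k8s.io/cri-api and google.golang.org/grpc modules (a recent grpc-go for NewClient):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same socket as --container-runtime-endpoint in the flag dump above.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime=%s version=%s CRI=%s\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}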
Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760921 4979 feature_gate.go:330] unrecognized feature gate: Example Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760927 4979 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760932 4979 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760937 4979 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760941 4979 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760946 4979 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760951 4979 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760956 4979 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760961 4979 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760965 4979 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760970 4979 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760975 4979 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760983 4979 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760989 4979 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760993 4979 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.760998 4979 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761003 4979 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761008 4979 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761013 4979 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761017 4979 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761024 4979 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761047 4979 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761053 4979 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761058 4979 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761064 4979 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
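Also coming up below: client certificate rotation. The current client certificate is valid until 2026-02-24, but the rotation deadline was drawn as a jittered point well before expiry (2025-11-24 here), and the immediate CSR attempt fails with connection refused simply because the API server is not reachable this early in boot; the manager retries later. A sketch of the jittered-deadline idea; the 70-90% window and the one-year validity are assumptions for illustration, not the kubelet's exact constants:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant late in the certificate's
// validity, here between 70% and 90% of its lifetime, so that many
// kubelets do not all renew at the same moment.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Assumed one-year window ending at the expiry seen in the log.
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}

The deadline logged in this boot sits roughly three quarters of the way through such a window, consistent with this kind of jitter.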
Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761070 4979 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761076 4979 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761082 4979 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761088 4979 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761094 4979 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761099 4979 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761104 4979 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761111 4979 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761119 4979 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761124 4979 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761129 4979 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761134 4979 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761139 4979 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761144 4979 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761149 4979 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761154 4979 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761158 4979 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761163 4979 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761168 4979 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761172 4979 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.761178 4979 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.761185 4979 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.762272 
4979 server.go:940] "Client rotation is on, will bootstrap in background" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.771853 4979 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.772147 4979 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.778109 4979 server.go:997] "Starting client certificate rotation" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.778147 4979 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.781026 4979 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-24 11:02:13.494729903 +0000 UTC Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.781206 4979 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.829007 4979 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.832605 4979 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 21:40:04 crc kubenswrapper[4979]: E0130 21:40:04.834729 4979 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.869305 4979 log.go:25] "Validated CRI v1 runtime API" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.930862 4979 log.go:25] "Validated CRI v1 image API" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.937209 4979 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.944659 4979 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-21-35-02-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.944698 4979 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.968086 4979 manager.go:217] Machine: {Timestamp:2026-01-30 21:40:04.965463915 +0000 UTC m=+0.926710998 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 
AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e7905fc5-1d22-4ae8-ba0f-c56ed758748c BootID:f159bd6e-a7e4-4439-9f37-0bcc8094103f Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:28:54:aa Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:28:54:aa Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2f:86:8a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d4:fe:49 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b2:c5:84 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:cb:36:3a Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:b8:04:98 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0a:96:cc:e2:aa:58 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2a:9f:c9:a6:50:74 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] 
Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.968602 4979 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.968891 4979 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.969861 4979 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.970154 4979 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.970210 4979 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.970506 4979 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.970523 4979 container_manager_linux.go:303] "Creating device plugin manager" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.971235 4979 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.971287 4979 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.971585 4979 state_mem.go:36] "Initialized new in-memory state store" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.971736 4979 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.978040 4979 kubelet.go:418] "Attempting to sync node with API server" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.978070 4979 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.978097 4979 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.978112 4979 kubelet.go:324] "Adding apiserver pod source" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.978125 4979 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.986658 4979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:04 crc kubenswrapper[4979]: E0130 21:40:04.986746 
4979 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:04 crc kubenswrapper[4979]: W0130 21:40:04.987929 4979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:04 crc kubenswrapper[4979]: E0130 21:40:04.987974 4979 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.991118 4979 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.992091 4979 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.993817 4979 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.999552 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.999582 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.999591 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.999599 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.999612 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.999619 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.999627 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.999641 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.999650 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.999660 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.999732 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 30 21:40:04 crc kubenswrapper[4979]: I0130 21:40:04.999744 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.004537 4979 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.005224 
4979 server.go:1280] "Started kubelet" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.005758 4979 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.005450 4979 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.006464 4979 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.006465 4979 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.007746 4979 server.go:460] "Adding debug handlers to kubelet server" Jan 30 21:40:05 crc systemd[1]: Started Kubernetes Kubelet. Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.009017 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.009190 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:51:35.497858146 +0000 UTC Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.009265 4979 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.009674 4979 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.017048 4979 factory.go:55] Registering systemd factory Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.017092 4979 factory.go:221] Registration of the systemd container factory successfully Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.017443 4979 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.017457 4979 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 21:40:05 crc kubenswrapper[4979]: W0130 21:40:05.017406 4979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.017517 4979 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.017515 4979 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.018268 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="200ms" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.020207 4979 factory.go:153] Registering CRI-O factory Jan 30 21:40:05 
crc kubenswrapper[4979]: I0130 21:40:05.020250 4979 factory.go:221] Registration of the crio container factory successfully Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.020341 4979 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.020379 4979 factory.go:103] Registering Raw factory Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.020412 4979 manager.go:1196] Started watching for new ooms in manager Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.021174 4979 manager.go:319] Starting recovery of all containers Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.020607 4979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fa018588ce727 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:40:05.005182759 +0000 UTC m=+0.966429792,LastTimestamp:2026-01-30 21:40:05.005182759 +0000 UTC m=+0.966429792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031682 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031741 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031752 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031764 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031776 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031788 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" 
seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031799 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031810 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031822 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031832 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031847 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031857 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031869 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031882 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031898 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031920 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031938 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" 
seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031955 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031970 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031984 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.031998 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032015 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032048 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032062 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032119 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032133 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032149 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032162 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 21:40:05 crc 
kubenswrapper[4979]: I0130 21:40:05.032171 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032180 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032223 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032235 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032275 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032285 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032296 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032307 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032317 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032327 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032339 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc 
kubenswrapper[4979]: I0130 21:40:05.032353 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032366 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032377 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032389 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032402 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032415 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032428 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032441 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032453 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032471 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032484 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 30 21:40:05 crc 
kubenswrapper[4979]: I0130 21:40:05.032497 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032510 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032527 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032539 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032552 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032564 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032575 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032588 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032601 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032612 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032657 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 30 21:40:05 crc 
kubenswrapper[4979]: I0130 21:40:05.032671 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.032684 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.040826 4979 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.040903 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.040968 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.040990 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041008 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041064 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041083 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041100 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041119 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041135 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041152 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041171 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041187 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041202 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041217 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041240 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041254 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041285 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041300 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041318 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041334 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041350 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041365 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041381 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041396 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041412 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041426 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041442 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041458 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041473 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041488 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041505 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041520 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041539 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041555 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041570 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041585 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041602 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041617 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041631 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041647 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041662 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041687 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041706 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041725 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041742 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041790 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041808 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041826 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041844 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041858 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041875 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041892 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041906 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041922 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041936 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041953 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041968 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041981 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.041998 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042014 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042054 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042070 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042084 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042100 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042115 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042130 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042147 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042162 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042177 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042189 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042202 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042214 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042227 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042241 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042254 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042268 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042284 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042297 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042314 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042328 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042342 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042357 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042371 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042387 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042403 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042416 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042430 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042443 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042458 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042471 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042485 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042497 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042514 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042529 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042544 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042558 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042577 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042592 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042608 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042627 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042643 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042658 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042672 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042689 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042706 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042723 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042738 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042753 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042769 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042786 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042804 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042821 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042840 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042857 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042872 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042890 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042907 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042921 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042935 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042951 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042967 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.042987 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043003 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043018 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043054 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043070 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043093 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043107 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043127 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043142 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043155 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043171 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043187 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043198 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043213 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043226 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043241 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043253 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043269 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043281 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043294 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043324 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043338 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043351 4979 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043364 4979 reconstruct.go:97] "Volume reconstruction finished" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.043374 4979 reconciler.go:26] "Reconciler: start to sync state" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.045481 4979 manager.go:324] Recovery completed Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.059427 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.062188 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.062236 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.062248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.063057 4979 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.063077 4979 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.063109 4979 state_mem.go:36] "Initialized new in-memory state store" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.064306 4979 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.066682 4979 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.068384 4979 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.068442 4979 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.068500 4979 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 21:40:05 crc kubenswrapper[4979]: W0130 21:40:05.075756 4979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.075870 4979 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.076622 4979 policy_none.go:49] "None policy: Start" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.077741 4979 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.077842 4979 state_mem.go:35] "Initializing new in-memory state store" Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.115463 4979 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.151729 4979 manager.go:334] "Starting Device Plugin manager" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.151885 4979 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.151914 4979 server.go:79] "Starting device plugin registration server" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.153081 4979 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.153118 4979 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.153845 4979 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.153986 4979 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.153996 4979 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.160234 4979 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.169484 4979 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 21:40:05 crc kubenswrapper[4979]: 
I0130 21:40:05.169650 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.170893 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.170935 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.170949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.171142 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.171636 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.171795 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.171958 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.171996 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.172006 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.172183 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.172377 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.172450 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.172856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.172889 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.172900 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.173070 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.173170 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.173240 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174243 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174286 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174296 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174394 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174487 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174551 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174566 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174572 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174587 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174596 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174621 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174631 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174591 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174658 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.174598 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.175464 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.175519 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.175537 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.175544 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 
21:40:05.175577 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.175587 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.175787 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.175828 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.177148 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.177189 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.177207 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.219896 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="400ms" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245692 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245735 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245762 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245786 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245804 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 
21:40:05.245825 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245844 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245865 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245900 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245918 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245935 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245952 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245970 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.245987 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.253435 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.254689 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.254727 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.254741 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.254771 4979 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.255521 4979 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348099 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348211 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348234 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348254 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348273 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348292 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348310 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348327 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348348 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348375 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348394 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348412 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348429 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348444 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348461 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.348947 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349067 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349099 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349124 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349149 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349175 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349198 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349225 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349211 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349270 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349249 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349296 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349321 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349332 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.349345 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.456101 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.457630 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.457682 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.457694 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.457729 4979 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.458226 4979 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.506696 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.524119 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.531754 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.553774 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.560320 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:05 crc kubenswrapper[4979]: W0130 21:40:05.612757 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ca69cb7cc3123572357afddd8b302eadfbbd8b1a5696f304f3421a43f9c3fdcc WatchSource:0}: Error finding container ca69cb7cc3123572357afddd8b302eadfbbd8b1a5696f304f3421a43f9c3fdcc: Status 404 returned error can't find the container with id ca69cb7cc3123572357afddd8b302eadfbbd8b1a5696f304f3421a43f9c3fdcc Jan 30 21:40:05 crc kubenswrapper[4979]: W0130 21:40:05.613238 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ec5b27638abbe4be6335675756da51147eb3cef14b60bf93a0004b4c4a5c60aa WatchSource:0}: Error finding container ec5b27638abbe4be6335675756da51147eb3cef14b60bf93a0004b4c4a5c60aa: Status 404 returned error can't find the container with id ec5b27638abbe4be6335675756da51147eb3cef14b60bf93a0004b4c4a5c60aa Jan 30 21:40:05 crc kubenswrapper[4979]: W0130 21:40:05.615803 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-5fe82f2b68b11bb1e993e3e89af62e15ecf2ff94899dc9b6eaab9aead32527e9 WatchSource:0}: Error finding container 5fe82f2b68b11bb1e993e3e89af62e15ecf2ff94899dc9b6eaab9aead32527e9: Status 404 returned error can't find the container with id 5fe82f2b68b11bb1e993e3e89af62e15ecf2ff94899dc9b6eaab9aead32527e9 Jan 30 21:40:05 crc kubenswrapper[4979]: W0130 21:40:05.616739 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6822dcaee29f50a8f9b85c3d4680cfa89762c2c10104bcf1b28d32c985f0fb1c WatchSource:0}: Error finding container 6822dcaee29f50a8f9b85c3d4680cfa89762c2c10104bcf1b28d32c985f0fb1c: Status 404 returned error can't find the container with id 6822dcaee29f50a8f9b85c3d4680cfa89762c2c10104bcf1b28d32c985f0fb1c Jan 30 21:40:05 crc kubenswrapper[4979]: W0130 21:40:05.617473 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-cd67e8b497474de7829303dad0d4241c83c2aa28818823aae6b78e968f8cfe9f WatchSource:0}: Error finding container cd67e8b497474de7829303dad0d4241c83c2aa28818823aae6b78e968f8cfe9f: Status 404 returned error can't find the container with id cd67e8b497474de7829303dad0d4241c83c2aa28818823aae6b78e968f8cfe9f Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.620981 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="800ms" Jan 30 21:40:05 crc kubenswrapper[4979]: W0130 21:40:05.827246 4979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.827395 4979 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.858631 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.860494 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.860577 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.860589 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:05 crc kubenswrapper[4979]: I0130 21:40:05.860623 4979 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.861251 4979 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc" Jan 30 21:40:05 crc kubenswrapper[4979]: W0130 21:40:05.886791 4979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:05 crc kubenswrapper[4979]: E0130 21:40:05.886932 4979 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.007926 4979 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.009948 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:58:36.799394988 +0000 UTC Jan 30 21:40:06 crc kubenswrapper[4979]: W0130 21:40:06.035445 4979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:06 crc kubenswrapper[4979]: E0130 21:40:06.035576 4979 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.081715 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cd67e8b497474de7829303dad0d4241c83c2aa28818823aae6b78e968f8cfe9f"} Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.083001 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca69cb7cc3123572357afddd8b302eadfbbd8b1a5696f304f3421a43f9c3fdcc"} Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.084316 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6822dcaee29f50a8f9b85c3d4680cfa89762c2c10104bcf1b28d32c985f0fb1c"} Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.085773 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5fe82f2b68b11bb1e993e3e89af62e15ecf2ff94899dc9b6eaab9aead32527e9"} Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.087058 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ec5b27638abbe4be6335675756da51147eb3cef14b60bf93a0004b4c4a5c60aa"} Jan 30 21:40:06 crc kubenswrapper[4979]: E0130 21:40:06.422902 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="1.6s" Jan 30 21:40:06 crc kubenswrapper[4979]: W0130 21:40:06.458759 4979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:06 crc kubenswrapper[4979]: E0130 21:40:06.459082 4979 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.661335 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.662697 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.662755 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.662767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.662800 4979 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:40:06 crc kubenswrapper[4979]: E0130 21:40:06.663432 4979 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc" Jan 30 21:40:06 crc kubenswrapper[4979]: I0130 21:40:06.984495 4979 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 21:40:06 crc kubenswrapper[4979]: E0130 21:40:06.985599 4979 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.007828 4979 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.010894 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:07:36.453473029 +0000 UTC Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.093533 4979 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2" exitCode=0 Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.093654 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.093612 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2"} Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.094869 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.094896 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.094906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.097411 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42"} Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.101183 4979 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486" exitCode=0 Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.101275 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486"} Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.101321 4979 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.103195 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.103355 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.103497 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.103397 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"154044c76d2b4f536328a478fc5be50868e48dab767e166901417e0db5f2934b"} Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.103527 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.103360 4979 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="154044c76d2b4f536328a478fc5be50868e48dab767e166901417e0db5f2934b" exitCode=0 Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.105144 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.105179 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.105190 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.106518 4979 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db" exitCode=0 Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.106550 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db"} Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.106646 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.107719 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.107748 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.107758 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.108358 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.109343 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.109377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:07 crc kubenswrapper[4979]: I0130 21:40:07.109394 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:07 crc kubenswrapper[4979]: W0130 21:40:07.856588 4979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:07 crc kubenswrapper[4979]: E0130 21:40:07.856704 4979 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.008175 4979 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.011316 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:01:52.153279109 +0000 UTC Jan 30 21:40:08 crc kubenswrapper[4979]: E0130 21:40:08.023657 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="3.2s" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.111272 4979 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fcb17e96880da17d1fb6d0445d30d19ef6b4332ef41b4db7f39f5ce44f296ba1" exitCode=0 Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.111354 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fcb17e96880da17d1fb6d0445d30d19ef6b4332ef41b4db7f39f5ce44f296ba1"} Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.111412 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.112733 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.112772 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.112785 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.113831 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"54b29baff6b64ae9923eb8c3a9824c90722fe24521d52c2842e6ed50404f0264"} Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.113854 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:08 crc 
kubenswrapper[4979]: I0130 21:40:08.115426 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.115454 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.115467 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.118152 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141"} Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.118174 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5"} Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.118188 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299"} Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.118290 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.119925 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.119970 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.119982 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.123271 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898"} Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.123314 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc"} Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.123331 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337"} Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.123415 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.124373 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 
21:40:08.124399 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.124411 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.126586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4"} Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.126620 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229"} Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.126637 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c"} Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.126652 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636"} Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.264078 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.265456 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.265518 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.265534 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:08 crc kubenswrapper[4979]: I0130 21:40:08.265571 4979 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:40:08 crc kubenswrapper[4979]: E0130 21:40:08.266239 4979 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.143:6443: connect: connection refused" node="crc" Jan 30 21:40:08 crc kubenswrapper[4979]: W0130 21:40:08.435189 4979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:08 crc kubenswrapper[4979]: E0130 21:40:08.435296 4979 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:08 crc kubenswrapper[4979]: W0130 21:40:08.567842 4979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:08 crc kubenswrapper[4979]: E0130 21:40:08.567970 4979 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.008498 4979 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:09 crc kubenswrapper[4979]: W0130 21:40:09.008652 4979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.143:6443: connect: connection refused Jan 30 21:40:09 crc kubenswrapper[4979]: E0130 21:40:09.008750 4979 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.143:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.012272 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:25:05.515752234 +0000 UTC Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.143222 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd"} Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.143527 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.145051 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.145110 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.145125 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.146261 4979 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0687d3dfb5d4c049aa43cbd73fa840150185b59af1c23e0a6bfa7ac4737923e0" exitCode=0 Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.146377 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.146473 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.146503 4979 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.146545 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.146900 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0687d3dfb5d4c049aa43cbd73fa840150185b59af1c23e0a6bfa7ac4737923e0"} Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.146937 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.147660 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.147687 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.147697 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.147718 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.147732 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.147739 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.147929 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.147981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.147995 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.148169 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.148194 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.148205 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:09 crc kubenswrapper[4979]: I0130 21:40:09.568155 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.012713 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:01:41.734636055 +0000 UTC Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.061773 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.153526 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.155737 4979 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd" exitCode=255 Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.155879 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd"} Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.155927 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.157285 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.157337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.157353 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.158188 4979 scope.go:117] "RemoveContainer" containerID="aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd" Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.160481 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1d7638a099f16ed710742d565620ad1bdf91d2731a4b63045ffdf969be0c8707"} Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.160508 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"49a71cdf3ce0c44f73138e3fb3fc78692590e715be029d9c8fbd2374f67e090e"} Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.160527 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"061dc510bae801e67e951fab4f006bf82ac167eaf939674ff9d012fe5776ae4f"} Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.160529 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.161365 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.161389 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:10 crc kubenswrapper[4979]: I0130 21:40:10.161400 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.013624 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:13:48.171227251 +0000 UTC Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.171995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1e58f2f254217e904de46531d5851c531deb37696449bd26415e0a39a0181abf"} Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.172074 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"039eac6e497a228726dd01a7e0159c08d917946af5ef633cda0f6c8ab4312a4a"} Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.172161 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.173578 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.173614 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.173625 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.174287 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.176089 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978"} Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.176193 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.176294 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.177071 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.177109 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.177163 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.339697 4979 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.466387 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.467883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.467932 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.467945 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.467979 4979 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:40:11 crc 
kubenswrapper[4979]: I0130 21:40:11.807290 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.807529 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.808807 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.808874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.808891 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:11 crc kubenswrapper[4979]: I0130 21:40:11.815508 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.014653 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 10:03:19.710825796 +0000 UTC Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.178321 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.178421 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.178450 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.178571 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.178867 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.179659 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.179699 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.179713 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.179759 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.179778 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.179790 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.180747 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.180824 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.180845 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.540840 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:12 crc kubenswrapper[4979]: I0130 21:40:12.771409 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.015517 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:29:16.958887643 +0000 UTC Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.180426 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.180528 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.180528 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.181950 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.181967 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.181989 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.181997 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.182015 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.182001 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.182156 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.182186 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.182197 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:13 crc kubenswrapper[4979]: I0130 21:40:13.356184 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.007375 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.016638 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:28:25.310251786 +0000 UTC Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.184700 4979 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.184783 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.184799 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.186666 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.186689 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.186710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.186740 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.186775 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.186758 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.186955 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.186986 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:14 crc kubenswrapper[4979]: I0130 21:40:14.187007 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:15 crc kubenswrapper[4979]: I0130 21:40:15.017562 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 01:09:37.66167815 +0000 UTC Jan 30 21:40:15 crc kubenswrapper[4979]: E0130 21:40:15.160679 4979 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 21:40:16 crc kubenswrapper[4979]: I0130 21:40:16.018450 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:59:54.099866012 +0000 UTC Jan 30 21:40:16 crc kubenswrapper[4979]: I0130 21:40:16.715494 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:16 crc kubenswrapper[4979]: I0130 21:40:16.715887 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:16 crc kubenswrapper[4979]: I0130 21:40:16.718769 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:16 crc kubenswrapper[4979]: I0130 21:40:16.718841 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:16 crc kubenswrapper[4979]: I0130 21:40:16.718862 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:16 crc kubenswrapper[4979]: 
I0130 21:40:16.721676 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:17 crc kubenswrapper[4979]: I0130 21:40:17.019482 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:35:20.668010041 +0000 UTC Jan 30 21:40:17 crc kubenswrapper[4979]: I0130 21:40:17.193804 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:17 crc kubenswrapper[4979]: I0130 21:40:17.195143 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:17 crc kubenswrapper[4979]: I0130 21:40:17.195203 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:17 crc kubenswrapper[4979]: I0130 21:40:17.195221 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:18 crc kubenswrapper[4979]: I0130 21:40:18.019728 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 20:05:07.236086691 +0000 UTC Jan 30 21:40:19 crc kubenswrapper[4979]: I0130 21:40:19.020588 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:02:08.080118114 +0000 UTC Jan 30 21:40:19 crc kubenswrapper[4979]: I0130 21:40:19.715501 4979 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 21:40:19 crc kubenswrapper[4979]: I0130 21:40:19.715599 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 21:40:19 crc kubenswrapper[4979]: E0130 21:40:19.768405 4979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.188fa018588ce727 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:40:05.005182759 +0000 UTC m=+0.966429792,LastTimestamp:2026-01-30 21:40:05.005182759 +0000 UTC m=+0.966429792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:40:19 crc kubenswrapper[4979]: I0130 21:40:19.832005 4979 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 21:40:19 crc kubenswrapper[4979]: I0130 21:40:19.832127 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 21:40:19 crc kubenswrapper[4979]: I0130 21:40:19.839535 4979 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 21:40:19 crc kubenswrapper[4979]: I0130 21:40:19.839623 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 21:40:20 crc kubenswrapper[4979]: I0130 21:40:20.021355 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:55:59.939572127 +0000 UTC Jan 30 21:40:20 crc kubenswrapper[4979]: I0130 21:40:20.067015 4979 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]log ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]etcd ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/priority-and-fairness-filter ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/start-apiextensions-informers ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/start-apiextensions-controllers ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/crd-informer-synced ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/start-system-namespaces-controller ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 30 21:40:20 crc kubenswrapper[4979]: 
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 30 21:40:20 crc kubenswrapper[4979]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/bootstrap-controller ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/apiservice-registration-controller ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/apiservice-discovery-controller ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]autoregister-completion ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/apiservice-openapi-controller ok
Jan 30 21:40:20 crc kubenswrapper[4979]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 30 21:40:20 crc kubenswrapper[4979]: livez check failed
Jan 30 21:40:20 crc kubenswrapper[4979]: I0130 21:40:20.067104 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 21:40:21 crc kubenswrapper[4979]: I0130 21:40:21.021480 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:12:42.247928307 +0000 UTC
Jan 30 21:40:22 crc kubenswrapper[4979]: I0130 21:40:22.022587 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:07:42.075755791 +0000 UTC
Jan 30 21:40:23 crc kubenswrapper[4979]: I0130 21:40:23.022751 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:45:25.900764399 +0000 UTC
Jan 30 21:40:23 crc kubenswrapper[4979]: I0130 21:40:23.381842 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 30 21:40:23 crc kubenswrapper[4979]: I0130 21:40:23.382421 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:40:23 crc kubenswrapper[4979]: I0130 21:40:23.384458 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:23 crc kubenswrapper[4979]: I0130 21:40:23.384863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:23 crc kubenswrapper[4979]: I0130 21:40:23.384984 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:23 crc kubenswrapper[4979]: I0130 21:40:23.396241 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.023308 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:38:14.396124275 +0000 UTC
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.212458 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.213764 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.213816 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.213828 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:24 crc kubenswrapper[4979]: E0130 21:40:24.824562 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.827727 4979 trace.go:236] Trace[554390551]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 21:40:13.709) (total time: 11118ms):
Jan 30 21:40:24 crc kubenswrapper[4979]: Trace[554390551]: ---"Objects listed" error: 11118ms (21:40:24.827)
Jan 30 21:40:24 crc kubenswrapper[4979]: Trace[554390551]: [11.118129414s] [11.118129414s] END
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.827787 4979 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 21:40:24 crc kubenswrapper[4979]: E0130 21:40:24.828960 4979 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.829805 4979 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.831362 4979 trace.go:236] Trace[1187413104]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 21:40:14.597) (total time: 10233ms):
Jan 30 21:40:24 crc kubenswrapper[4979]: Trace[1187413104]: ---"Objects listed" error: 10233ms (21:40:24.831)
Jan 30 21:40:24 crc kubenswrapper[4979]: Trace[1187413104]: [10.233688728s] [10.233688728s] END
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.831403 4979 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.832108 4979 trace.go:236] Trace[1433974907]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 21:40:12.361) (total time: 12470ms):
Jan 30 21:40:24 crc kubenswrapper[4979]: Trace[1433974907]: ---"Objects listed" error: 12470ms (21:40:24.832)
Jan 30 21:40:24 crc kubenswrapper[4979]: Trace[1433974907]: [12.470825732s] [12.470825732s] END
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.832311 4979 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.832962 4979 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.834351 4979 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.857075 4979 csr.go:261] certificate signing request csr-kq9nt is approved, waiting to be issued
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.865825 4979 csr.go:257] certificate signing request csr-kq9nt is issued
Jan 30 21:40:24 crc kubenswrapper[4979]: I0130 21:40:24.989491 4979 apiserver.go:52] "Watching apiserver"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.001697 4979 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.002020 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.002480 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.002488 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.002557 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.004088 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.004211 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.004234 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.004662 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.004781 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.004877 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.008491 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.008510 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.009770 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.010670 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.024018 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.024041 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:10:42.862776223 +0000 UTC
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.024359 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.024372 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.024606 4979 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.024670 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.032822 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034388 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034436 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034454 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034469 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034485 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034499 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034515 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034530 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034547 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034576 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034591 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") 
pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034608 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034625 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034640 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034657 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034674 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034691 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034708 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034726 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034741 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034758 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034775 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034793 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034818 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034837 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034868 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034885 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034909 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034926 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034947 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034966 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034983 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.034998 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035022 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035060 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035078 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035093 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035110 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035126 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035141 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035156 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035172 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035189 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035206 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035225 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035242 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035258 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035273 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035289 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035306 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035321 4979 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035335 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035371 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035385 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035402 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035417 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035432 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035449 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035464 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035481 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 21:40:25 crc kubenswrapper[4979]: 
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035497 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035512 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035527 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035542 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035564 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035581 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035598 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035618 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035638 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035654 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035673 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035690 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035706 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035722 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035739 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035755 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035770 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035785 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035800 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035830 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035846 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035860 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035876 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035910 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035924 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035942 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035958 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035974 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.035991 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036008 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036044 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036063 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036081 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036099 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036118 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036134 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036151 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036168 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036185 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036201 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036219 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036236 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036252 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036269 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036286 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036302 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036319 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036336 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036352 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036369 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036388 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036403 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036419 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036435 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036451 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036467 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036483 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036501 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036521 4979 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036539 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036555 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036571 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036588 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036605 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036702 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036728 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036746 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036764 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: 
I0130 21:40:25.036781 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036797 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036813 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036829 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036844 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036860 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036882 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036898 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036914 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036932 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036948 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036963 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036979 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.036995 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.037010 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.037288 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.037314 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.037659 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.037936 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038173 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038206 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038223 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038241 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038264 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038285 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038302 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038323 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038340 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038358 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038375 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038391 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038408 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038424 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038440 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038456 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038472 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038487 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038504 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038522 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038540 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038557 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038573 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038589 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038618 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038637 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038639 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038655 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038674 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038691 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038707 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038726 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038742 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038759 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038775 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038792 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038809 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038827 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038844 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038859 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038876 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038893 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038911 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038927 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038945 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038951 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.038961 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039011 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039070 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039104 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039357 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039167 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039668 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039708 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039739 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039773 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039804 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039836 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039864 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039915 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039946 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.039976 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040002 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040057 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040089 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040165 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040186 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040201 4979 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.040273 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.040342 4979 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:25.540317706 +0000 UTC m=+21.501564739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040566 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040800 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.040936 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041468 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041530 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041473 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041705 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041361 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041757 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.041881 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.042144 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.042287 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.042587 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.042778 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.044581 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.044769 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.045289 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.045356 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.045614 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.045847 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.046135 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.046329 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.046418 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.046769 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.046872 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.047175 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.047458 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.047479 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.047691 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.047966 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.048008 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.048235 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.048337 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.048494 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.048565 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.048749 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.049160 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.049212 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.049222 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.049541 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.049827 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.049838 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.049831 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.049938 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:25.549916625 +0000 UTC m=+21.511163658 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.050446 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.050435 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.050860 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.050935 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.051133 4979 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.051203 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.051598 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.051713 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.051946 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.051979 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.052584 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.053378 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.053491 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.053637 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.054019 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.054140 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.054165 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.054813 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.054909 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.054948 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.062229 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.062509 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.062743 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.063109 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.063120 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.063407 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.063738 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.064207 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.064471 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.064508 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.066028 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.067264 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.067915 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.068541 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.068601 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.068790 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.068958 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.069494 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.072258 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.072514 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.072728 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.072983 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.073618 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.073719 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.074174 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.074342 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.074524 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.074782 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.075047 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.075073 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.075216 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:40:25.575185276 +0000 UTC m=+21.536432499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.075437 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.077522 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.077786 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.077907 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.078386 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.078626 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.078766 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.078991 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.079246 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.079319 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.079678 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.079471 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.079610 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.081357 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.082544 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.082576 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.082591 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.082657 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:25.582634197 +0000 UTC m=+21.543881220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.082769 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.084958 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.085561 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.085916 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086292 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086221 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086507 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086532 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086633 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086639 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086759 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086960 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086968 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.087430 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.087436 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.087660 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.087953 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.088010 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.088130 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.088311 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.088457 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.088780 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.088802 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.089062 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.089206 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.089423 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.090243 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.086089 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.090680 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.090766 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.091612 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.092457 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.092834 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.092933 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.094860 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.095079 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.095265 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.095479 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.095564 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.095846 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.095896 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.096252 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.096798 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.096917 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.099304 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.087248 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.099821 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.100155 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.100224 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.100356 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.100995 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.101613 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.103069 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.103916 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.103941 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.104268 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.104551 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.105402 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.106485 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.107102 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.107342 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.107366 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.108167 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.108219 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.108335 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.108355 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:25.607431576 +0000 UTC m=+21.568678609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.108331 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.108565 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.108983 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.110852 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.111129 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.112729 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.113397 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.114606 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.116258 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.116835 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.117222 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.118118 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.118872 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.118866 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.119137 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.119219 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.119622 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.119683 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.120088 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.120490 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.121692 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.121779 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.121859 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.122170 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.122241 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.122391 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.124790 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.127534 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.127749 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.130482 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.133758 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.137405 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.138231 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.138479 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.138901 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.139716 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.141146 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.141717 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.142326 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.142666 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143046 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143123 4979 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 
21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143154 4979 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143185 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143201 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143220 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143238 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143253 4979 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143270 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143284 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143298 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143310 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143213 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143361 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143379 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143399 4979 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143412 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143424 4979 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143435 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143454 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143468 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143482 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143506 4979 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143524 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143536 4979 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143548 4979 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143567 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143585 4979 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143598 4979 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143611 4979 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143626 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143638 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143650 4979 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143662 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143676 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143687 4979 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143698 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143712 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143728 4979 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143740 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143752 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143768 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143779 4979 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143791 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143821 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143837 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143854 4979 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143892 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143904 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143918 4979 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143930 4979 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143942 4979 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143953 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143980 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" 
(UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.143994 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144008 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144024 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144058 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144074 4979 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144087 4979 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144116 4979 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144152 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144165 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144177 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144194 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144208 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144221 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144237 4979 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144250 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144263 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144275 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144312 4979 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144328 4979 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144342 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144359 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144376 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144390 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144403 4979 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144415 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144431 4979 reconciler_common.go:293] "Volume detached for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144444 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144456 4979 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144476 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144490 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144501 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144513 4979 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144532 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144544 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144555 4979 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144567 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144583 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144597 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144658 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144677 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144690 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144705 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144718 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144734 4979 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144746 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144760 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144774 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144791 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144805 4979 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144817 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144829 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144844 4979 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144856 4979 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144867 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144884 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144896 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144908 4979 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144947 4979 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144965 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144979 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.144993 4979 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145007 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145023 4979 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145057 4979 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145072 4979 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145091 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145105 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145119 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145133 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145149 4979 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145162 4979 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145176 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145189 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145208 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145242 4979 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145258 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145270 4979 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145288 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145301 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145314 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145332 4979 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145344 4979 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145358 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145371 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145398 4979 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145412 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145425 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145438 4979 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145455 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145469 4979 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145484 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" 
(UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145499 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145513 4979 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145526 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145538 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145566 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145578 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145589 4979 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145601 4979 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145617 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145632 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145645 4979 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145661 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145673 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145686 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145700 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145736 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145751 4979 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145764 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145779 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145795 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145808 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145822 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145836 4979 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145853 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145876 4979 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145891 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145907 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145921 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145934 4979 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145947 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145964 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145978 4979 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.145992 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.146006 4979 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.138418 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.148270 4979 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.148312 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.148326 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.148340 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.150454 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.151139 4979 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.151176 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.151187 4979 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.151198 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.151213 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.151215 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.152057 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.153347 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.153729 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.155095 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.156384 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.158333 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.160841 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.162673 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.163814 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.164737 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.166243 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.166904 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.167187 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.168720 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.169090 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.170241 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.170906 4979 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.171679 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.174736 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.175387 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.175947 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.177899 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.179095 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.179736 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.180898 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.181819 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.182800 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.183007 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.183674 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.185190 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.186326 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.186943 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.187642 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.188665 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.189581 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.190960 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.191592 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.192878 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.193535 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.194299 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.195262 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.195557 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.196086 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.201076 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.208179 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.220310 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.235104 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.247082 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.252223 4979 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.252261 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.252271 4979 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.252281 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.256886 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.269546 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.281052 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.293399 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.305082 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.322374 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.342860 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.366441 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.388385 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.448944 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.462774 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.475594 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.488358 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.502512 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.515470 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.532841 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.555885 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.555968 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.556008 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.556072 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:26.556049835 +0000 UTC m=+22.517296868 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.556151 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.556210 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:26.556197638 +0000 UTC m=+22.517444671 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.578288 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.657442 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.657537 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.657569 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657701 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657728 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657739 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657848 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:40:26.657710247 +0000 UTC m=+22.618957280 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657917 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:26.657906232 +0000 UTC m=+22.619153265 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657908 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657971 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.657991 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:25 crc kubenswrapper[4979]: E0130 21:40:25.658103 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:26.658076116 +0000 UTC m=+22.619323319 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.867195 4979 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 21:35:24 +0000 UTC, rotation deadline is 2026-11-07 08:35:25.253961084 +0000 UTC Jan 30 21:40:25 crc kubenswrapper[4979]: I0130 21:40:25.867288 4979 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6730h54m59.386675896s for next certificate rotation Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.025088 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:18:26.718348035 +0000 UTC Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.219504 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232"} Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.219564 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"33c38ed5a670798ba0108c80b24ba1dbac83bfc637d1afc0476a86ce5f3037e2"} Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.222166 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490"} Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.222227 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220"} Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.222240 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0034876ce8c1f15d39ab53cac1d8ecd7f0ca27691a6438d9e37b78544eafb308"} Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.223789 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"801a5a05057a522df7aae470fe16721dc47b25237887153178ded5b7952d2ec1"} Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.232160 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.232513 4979 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.247051 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.257111 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.265660 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.276846 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.289023 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.299070 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.313793 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.324607 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.333737 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kqsqg"] Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.334138 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-p8nz9"] Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.334295 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.334413 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: W0130 21:40:26.336062 4979 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.336128 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.337013 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.337259 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.338437 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.338498 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.338656 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.338918 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.339190 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.340871 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.354979 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.363814 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28767351-ec5c-4f9e-8b01-2954eaf4ea30-mcd-auth-proxy-config\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc 
kubenswrapper[4979]: I0130 21:40:26.363849 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkhmw\" (UniqueName: \"kubernetes.io/projected/01c7f257-42d4-4934-805e-7f5d80988fa3-kube-api-access-lkhmw\") pod \"node-resolver-p8nz9\" (UID: \"01c7f257-42d4-4934-805e-7f5d80988fa3\") " pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.363872 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/28767351-ec5c-4f9e-8b01-2954eaf4ea30-rootfs\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.363888 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zdwj\" (UniqueName: \"kubernetes.io/projected/28767351-ec5c-4f9e-8b01-2954eaf4ea30-kube-api-access-8zdwj\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.363912 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28767351-ec5c-4f9e-8b01-2954eaf4ea30-proxy-tls\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.363928 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/01c7f257-42d4-4934-805e-7f5d80988fa3-hosts-file\") pod \"node-resolver-p8nz9\" (UID: \"01c7f257-42d4-4934-805e-7f5d80988fa3\") " pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.372830 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.388499 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.405734 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.423903 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.437703 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.451349 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464076 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464552 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28767351-ec5c-4f9e-8b01-2954eaf4ea30-proxy-tls\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464597 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/01c7f257-42d4-4934-805e-7f5d80988fa3-hosts-file\") pod \"node-resolver-p8nz9\" (UID: \"01c7f257-42d4-4934-805e-7f5d80988fa3\") " pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464638 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28767351-ec5c-4f9e-8b01-2954eaf4ea30-mcd-auth-proxy-config\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkhmw\" (UniqueName: \"kubernetes.io/projected/01c7f257-42d4-4934-805e-7f5d80988fa3-kube-api-access-lkhmw\") pod \"node-resolver-p8nz9\" (UID: \"01c7f257-42d4-4934-805e-7f5d80988fa3\") " pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464673 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/28767351-ec5c-4f9e-8b01-2954eaf4ea30-rootfs\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464688 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8zdwj\" (UniqueName: \"kubernetes.io/projected/28767351-ec5c-4f9e-8b01-2954eaf4ea30-kube-api-access-8zdwj\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464709 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/01c7f257-42d4-4934-805e-7f5d80988fa3-hosts-file\") pod \"node-resolver-p8nz9\" (UID: \"01c7f257-42d4-4934-805e-7f5d80988fa3\") " pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.464776 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/28767351-ec5c-4f9e-8b01-2954eaf4ea30-rootfs\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.465701 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28767351-ec5c-4f9e-8b01-2954eaf4ea30-mcd-auth-proxy-config\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.478152 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.482328 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkhmw\" (UniqueName: \"kubernetes.io/projected/01c7f257-42d4-4934-805e-7f5d80988fa3-kube-api-access-lkhmw\") pod \"node-resolver-p8nz9\" (UID: \"01c7f257-42d4-4934-805e-7f5d80988fa3\") " pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.482971 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zdwj\" (UniqueName: \"kubernetes.io/projected/28767351-ec5c-4f9e-8b01-2954eaf4ea30-kube-api-access-8zdwj\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.494940 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.508657 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.521868 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.533409 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.565606 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.565661 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.565788 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.565807 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.565862 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:28.565838518 +0000 UTC m=+24.527085551 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.565905 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:28.565881199 +0000 UTC m=+24.527128232 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.659667 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-p8nz9" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.666072 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.666204 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.666244 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666406 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666435 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666449 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666512 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:28.666490453 +0000 UTC m=+24.627737486 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666582 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:40:28.666574235 +0000 UTC m=+24.627821268 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666638 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666650 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666659 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:26 crc kubenswrapper[4979]: E0130 21:40:26.666686 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:28.666678638 +0000 UTC m=+24.627925671 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:26 crc kubenswrapper[4979]: W0130 21:40:26.674729 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01c7f257_42d4_4934_805e_7f5d80988fa3.slice/crio-cffd86a4e08153a8961c0767cf5db41ee6c11c5380077996c614558f8fc05a9d WatchSource:0}: Error finding container cffd86a4e08153a8961c0767cf5db41ee6c11c5380077996c614558f8fc05a9d: Status 404 returned error can't find the container with id cffd86a4e08153a8961c0767cf5db41ee6c11c5380077996c614558f8fc05a9d Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.717299 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-75j89"] Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.718062 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.719868 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xh5mg"] Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.720173 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.720462 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.720660 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.721142 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.721315 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.721400 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.721493 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.721944 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.727223 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.731326 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.732363 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.738048 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.750463 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766630 4979 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766709 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-netns\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 
30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766742 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-etc-kubernetes\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766763 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-os-release\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766801 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-daemon-config\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766818 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-multus-certs\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766890 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gr57\" (UniqueName: \"kubernetes.io/projected/6722e8df-a635-4808-b6b9-d5633fc3d34b-kube-api-access-8gr57\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766942 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-k8s-cni-cncf-io\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.766986 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767043 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-system-cni-dir\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767072 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-hostroot\") pod \"multus-xh5mg\" (UID: 
\"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767091 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767116 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-cnibin\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767134 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-cni-bin\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767152 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-os-release\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767172 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-socket-dir-parent\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767188 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-cni-multus\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767220 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-cni-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767238 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6722e8df-a635-4808-b6b9-d5633fc3d34b-cni-binary-copy\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767254 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-kubelet\") pod \"multus-xh5mg\" (UID: 
\"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767274 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-system-cni-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767289 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-conf-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767307 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m46p\" (UniqueName: \"kubernetes.io/projected/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-kube-api-access-6m46p\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767358 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cnibin\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.767385 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cni-binary-copy\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.780563 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.795350 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.811271 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.826912 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.841809 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.858016 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869049 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-system-cni-dir\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869235 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-system-cni-dir\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869120 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869252 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-hostroot\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869372 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869452 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-cnibin\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869498 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-cni-bin\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869521 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-os-release\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869542 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-socket-dir-parent\") pod 
\"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869547 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-cnibin\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869565 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-cni-multus\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869600 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-cni-multus\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869638 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-kubelet\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869671 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-cni-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869685 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-socket-dir-parent\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869693 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6722e8df-a635-4808-b6b9-d5633fc3d34b-cni-binary-copy\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869724 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-system-cni-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869749 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-conf-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869755 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-kubelet\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869778 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m46p\" (UniqueName: \"kubernetes.io/projected/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-kube-api-access-6m46p\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869809 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cnibin\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869831 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cni-binary-copy\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869869 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-netns\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869895 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-etc-kubernetes\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869918 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-os-release\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869945 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-cni-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870008 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-daemon-config\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870013 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-os-release\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870051 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-multus-certs\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869722 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-var-lib-cni-bin\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870084 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gr57\" (UniqueName: \"kubernetes.io/projected/6722e8df-a635-4808-b6b9-d5633fc3d34b-kube-api-access-8gr57\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870116 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-conf-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870121 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870170 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-k8s-cni-cncf-io\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870240 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-k8s-cni-cncf-io\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870516 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-etc-kubernetes\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870547 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-netns\") pod \"multus-xh5mg\" (UID: 
\"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.869808 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-system-cni-dir\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870017 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-os-release\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870599 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-host-run-multus-certs\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870646 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cnibin\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.870858 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.871478 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6722e8df-a635-4808-b6b9-d5633fc3d34b-hostroot\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.872208 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.872418 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6722e8df-a635-4808-b6b9-d5633fc3d34b-cni-binary-copy\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.872438 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6722e8df-a635-4808-b6b9-d5633fc3d34b-multus-daemon-config\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.873006 4979 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-cni-binary-copy\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.886566 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.889106 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gr57\" (UniqueName: \"kubernetes.io/projected/6722e8df-a635-4808-b6b9-d5633fc3d34b-kube-api-access-8gr57\") pod \"multus-xh5mg\" (UID: \"6722e8df-a635-4808-b6b9-d5633fc3d34b\") " pod="openshift-multus/multus-xh5mg" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.889378 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m46p\" (UniqueName: \"kubernetes.io/projected/f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e-kube-api-access-6m46p\") pod \"multus-additional-cni-plugins-75j89\" (UID: \"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\") " pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.900865 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.914121 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.928877 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.944826 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.959930 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.973945 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:26 crc kubenswrapper[4979]: I0130 21:40:26.991592 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:26Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.012514 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.025272 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:49:12.020635742 +0000 UTC Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.031519 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.045001 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.055860 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-75j89" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.056466 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.062886 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xh5mg" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.070281 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.070298 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.070300 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:27 crc kubenswrapper[4979]: E0130 21:40:27.070417 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:27 crc kubenswrapper[4979]: E0130 21:40:27.070507 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:27 crc kubenswrapper[4979]: E0130 21:40:27.070599 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.076622 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.077531 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.078774 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.079454 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.080121 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.081153 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.081833 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 21:40:27 crc kubenswrapper[4979]: W0130 21:40:27.098122 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6722e8df_a635_4808_b6b9_d5633fc3d34b.slice/crio-6e73d5e131efd71a3d438196d7ac9fc9be13e317bd7b6255735e2a7b9280e9ff WatchSource:0}: Error finding container 6e73d5e131efd71a3d438196d7ac9fc9be13e317bd7b6255735e2a7b9280e9ff: Status 404 returned error can't find the container with id 6e73d5e131efd71a3d438196d7ac9fc9be13e317bd7b6255735e2a7b9280e9ff Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.112074 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jttsv"] Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.113022 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.115897 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.116225 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.129072 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.130009 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.130009 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.130205 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.136058 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.161487 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.174858 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-slash\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 
21:40:27.174900 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-netns\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.174922 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-systemd-units\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.174937 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-etc-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.174956 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-systemd\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.174975 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-ovn-kubernetes\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.174993 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-script-lib\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175008 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-node-log\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175043 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175061 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gg6r\" (UniqueName: \"kubernetes.io/projected/34ce4851-1ecc-47da-89ca-09894eb0908a-kube-api-access-5gg6r\") pod \"ovnkube-node-jttsv\" 
(UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175077 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-log-socket\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175092 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-bin\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175107 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-netd\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175221 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-env-overrides\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175256 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-kubelet\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175277 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-ovn\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175293 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-config\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175308 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34ce4851-1ecc-47da-89ca-09894eb0908a-ovn-node-metrics-cert\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175328 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.175361 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-var-lib-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.191170 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.212705 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.228918 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerStarted","Data":"5638f5d00b204f802db25c86ead6d0695eac9f3235ca33932926822665e620ff"} Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.228913 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.232113 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p8nz9" event={"ID":"01c7f257-42d4-4934-805e-7f5d80988fa3","Type":"ContainerStarted","Data":"d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa"} Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.232160 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p8nz9" event={"ID":"01c7f257-42d4-4934-805e-7f5d80988fa3","Type":"ContainerStarted","Data":"cffd86a4e08153a8961c0767cf5db41ee6c11c5380077996c614558f8fc05a9d"} Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.234652 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerStarted","Data":"6e73d5e131efd71a3d438196d7ac9fc9be13e317bd7b6255735e2a7b9280e9ff"} Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.243200 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.257252 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276198 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-slash\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276250 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-netns\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276268 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-systemd-units\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276286 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-etc-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276301 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-systemd\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276316 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-ovn-kubernetes\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276315 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-slash\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276331 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-script-lib\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276348 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-node-log\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276367 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276379 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-systemd\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276389 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gg6r\" (UniqueName: \"kubernetes.io/projected/34ce4851-1ecc-47da-89ca-09894eb0908a-kube-api-access-5gg6r\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276410 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-log-socket\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276410 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-systemd-units\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276458 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-bin\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276431 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-bin\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276502 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-log-socket\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276502 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-netd\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276519 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-netd\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276575 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-kubelet\") pod \"ovnkube-node-jttsv\" (UID: 
\"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276594 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-ovn\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276610 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-config\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276626 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-env-overrides\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276643 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276661 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34ce4851-1ecc-47da-89ca-09894eb0908a-ovn-node-metrics-cert\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276677 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-var-lib-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276758 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-var-lib-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276782 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-kubelet\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276802 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-ovn\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc 
kubenswrapper[4979]: I0130 21:40:27.276442 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-etc-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.277071 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.277752 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-env-overrides\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc 
kubenswrapper[4979]: I0130 21:40:27.276449 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-node-log\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276486 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-ovn-kubernetes\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276511 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.283660 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-script-lib\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.283758 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-openvswitch\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.276516 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-netns\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.284364 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-config\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.284944 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34ce4851-1ecc-47da-89ca-09894eb0908a-ovn-node-metrics-cert\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.297700 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.303054 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gg6r\" (UniqueName: \"kubernetes.io/projected/34ce4851-1ecc-47da-89ca-09894eb0908a-kube-api-access-5gg6r\") pod \"ovnkube-node-jttsv\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.313078 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.327532 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.340241 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.355733 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.368506 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.369831 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.378589 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28767351-ec5c-4f9e-8b01-2954eaf4ea30-proxy-tls\") pod \"machine-config-daemon-kqsqg\" (UID: \"28767351-ec5c-4f9e-8b01-2954eaf4ea30\") " pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.384017 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.403763 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.418868 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.430867 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.436741 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.445052 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: W0130 21:40:27.449009 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34ce4851_1ecc_47da_89ca_09894eb0908a.slice/crio-f18a371d736e6911b0f592f8daaea8c3e8cd37b3a1facadbee20dabf9d3b9ce4 WatchSource:0}: Error finding container f18a371d736e6911b0f592f8daaea8c3e8cd37b3a1facadbee20dabf9d3b9ce4: Status 404 returned error can't find the container with id f18a371d736e6911b0f592f8daaea8c3e8cd37b3a1facadbee20dabf9d3b9ce4 Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.460298 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.480209 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.498186 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.519098 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.551477 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.555465 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.592239 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.630025 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:27 crc kubenswrapper[4979]: W0130 21:40:27.657547 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28767351_ec5c_4f9e_8b01_2954eaf4ea30.slice/crio-b861be32c99469b053f13d329d408c0d996100abdc71ad924f15bd103ba423ae WatchSource:0}: Error finding container b861be32c99469b053f13d329d408c0d996100abdc71ad924f15bd103ba423ae: Status 404 returned error can't find the container with id b861be32c99469b053f13d329d408c0d996100abdc71ad924f15bd103ba423ae Jan 30 21:40:27 crc kubenswrapper[4979]: I0130 21:40:27.670513 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:27Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.026309 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 17:42:58.938881971 +0000 UTC Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.240481 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" exitCode=0 Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.240558 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.240620 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"f18a371d736e6911b0f592f8daaea8c3e8cd37b3a1facadbee20dabf9d3b9ce4"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.242375 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.245152 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerStarted","Data":"553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.247183 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e" containerID="7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d" exitCode=0 Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.247315 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerDied","Data":"7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.249563 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.249610 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.249624 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"b861be32c99469b053f13d329d408c0d996100abdc71ad924f15bd103ba423ae"} Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.259098 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.274701 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.287669 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.305011 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.320431 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.341469 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.358237 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.375085 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.391288 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.406897 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.427723 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.442947 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-
kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.460115 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.477016 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.495473 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.516654 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc 
kubenswrapper[4979]: I0130 21:40:28.532592 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.546937 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.562428 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.579269 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.595309 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.595359 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.595475 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.595505 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.595537 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:32.595521398 +0000 UTC m=+28.556768431 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.595613 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:32.59558641 +0000 UTC m=+28.556833513 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.600695 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-over
rides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg
6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.620056 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.636789 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.653202 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.677134 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.696316 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.696443 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.696481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696590 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696598 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696609 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696625 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696626 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696641 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696599 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:40:32.696559273 +0000 UTC m=+28.657806346 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696725 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:32.696703057 +0000 UTC m=+28.657950090 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:28 crc kubenswrapper[4979]: E0130 21:40:28.696742 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:32.696734178 +0000 UTC m=+28.657981321 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:28 crc kubenswrapper[4979]: I0130 21:40:28.712802 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.027432 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:24:51.209630003 +0000 UTC Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.069274 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.069393 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.069274 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:29 crc kubenswrapper[4979]: E0130 21:40:29.069479 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:29 crc kubenswrapper[4979]: E0130 21:40:29.069566 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:29 crc kubenswrapper[4979]: E0130 21:40:29.069648 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.257602 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e" containerID="6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae" exitCode=0 Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.257701 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerDied","Data":"6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae"} Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.263670 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.263764 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.263803 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.264060 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.264091 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.275503 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.293124 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.312529 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.328991 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.352602 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.361677 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-f2xld"] Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.365793 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.371445 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.371550 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.371589 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.371753 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.391535 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.406338 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zncl\" (UniqueName: \"kubernetes.io/projected/65d4cf3f-dc90-408a-9652-740d7472fb39-kube-api-access-5zncl\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.406374 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65d4cf3f-dc90-408a-9652-740d7472fb39-serviceca\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.406424 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/65d4cf3f-dc90-408a-9652-740d7472fb39-host\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.410271 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.428906 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.445756 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.461842 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.477326 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.492982 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.506016 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.506862 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zncl\" (UniqueName: \"kubernetes.io/projected/65d4cf3f-dc90-408a-9652-740d7472fb39-kube-api-access-5zncl\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.506906 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65d4cf3f-dc90-408a-9652-740d7472fb39-serviceca\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.506959 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/65d4cf3f-dc90-408a-9652-740d7472fb39-host\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.507095 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/65d4cf3f-dc90-408a-9652-740d7472fb39-host\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.508451 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/65d4cf3f-dc90-408a-9652-740d7472fb39-serviceca\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.521686 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.526977 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zncl\" (UniqueName: \"kubernetes.io/projected/65d4cf3f-dc90-408a-9652-740d7472fb39-kube-api-access-5zncl\") pod \"node-ca-f2xld\" (UID: \"65d4cf3f-dc90-408a-9652-740d7472fb39\") " pod="openshift-image-registry/node-ca-f2xld" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.537675 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.550375 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.564328 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.577767 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.593141 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.614277 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.650784 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 
21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.691574 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-f2xld"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.692330 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.734602 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.775289 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.814857 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.855248 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:29 crc kubenswrapper[4979]: I0130 21:40:29.891066 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:29Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.027832 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:58:38.425999164 +0000 UTC Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.270407 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-f2xld" event={"ID":"65d4cf3f-dc90-408a-9652-740d7472fb39","Type":"ContainerStarted","Data":"da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad"} Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.270492 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-f2xld" event={"ID":"65d4cf3f-dc90-408a-9652-740d7472fb39","Type":"ContainerStarted","Data":"9d57eac96d748a9a1f760d1fdd0b0fb1bb1445b853af54abb7355c05fa5e86d5"} Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.273612 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e" containerID="2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38" exitCode=0 Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.273711 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerDied","Data":"2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38"} Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.278468 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.291019 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.306597 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.320950 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.341196 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.355262 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.368599 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.382773 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.396707 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.411672 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.427776 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.440519 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.460020 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.476507 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.490380 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.507389 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.533099 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.572403 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.617569 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.654490 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.695245 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.731966 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.772858 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.810556 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.853277 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.890150 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.935245 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:30 crc kubenswrapper[4979]: I0130 21:40:30.973689 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:30Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.013025 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.028287 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:18:24.995427886 +0000 UTC Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.068846 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.068861 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.069010 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.069166 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.069185 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.069370 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.229826 4979 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.232252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.232294 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.232305 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.232467 4979 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.240513 4979 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.240818 4979 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.242060 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.242107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.242127 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.242149 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.242161 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.260880 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.265428 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.265462 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.265473 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.265488 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.265499 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.285554 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e" containerID="22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510" exitCode=0 Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.285618 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerDied","Data":"22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510"} Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.286142 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.291208 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.291249 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.291261 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.291278 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.291292 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.298667 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc 
kubenswrapper[4979]: E0130 21:40:31.305300 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 
30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.310198 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.310238 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.310248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.310267 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.310287 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.317060 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.324246 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.327948 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.328059 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.328077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.328107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.328132 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.328753 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.341723 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: E0130 21:40:31.341849 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.343421 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.344317 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.344358 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.344369 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.344386 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.344397 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.358564 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.373681 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.387916 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.402882 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.419065 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.447216 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.447263 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.447279 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.447302 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.447315 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.451726 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.492437 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.540060 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.551405 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.551450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.551460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.551480 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.551494 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.573803 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.619901 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.653759 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.653802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.653813 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.653830 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.653841 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.756632 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.756684 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.756695 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.756717 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.756732 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.860614 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.860668 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.860681 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.860702 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.860715 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.963661 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.963704 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.963713 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.963732 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:31 crc kubenswrapper[4979]: I0130 21:40:31.963744 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:31Z","lastTransitionTime":"2026-01-30T21:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.029149 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:30:43.412397541 +0000 UTC Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.067719 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.067898 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.067916 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.067943 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.067960 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.171933 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.172017 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.172092 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.172129 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.172153 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.276000 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.276072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.276082 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.276106 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.276117 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.295763 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e" containerID="b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6" exitCode=0 Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.296055 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerDied","Data":"b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.303578 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.312098 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.329901 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.342743 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.359993 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.378621 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.378701 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.378719 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.378746 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.378762 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.379989 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.396532 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.414750 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.430727 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.445730 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.463069 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.480630 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.481446 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.481481 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.481491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.481510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.481522 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.501926 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4
bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.522940 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d
43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.541969 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:32Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.584937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.584974 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.584984 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.585000 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.585010 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.641926 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.642017 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.642267 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.642359 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:40.642332749 +0000 UTC m=+36.603579822 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.642427 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.642482 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:40.642464343 +0000 UTC m=+36.603711416 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.688564 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.688608 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.688619 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.688637 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.688650 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.743568 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.743778 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.743868 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:40:40.743827047 +0000 UTC m=+36.705074140 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.743958 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744011 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744077 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744098 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744180 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:40.744154346 +0000 UTC m=+36.705401419 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744291 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744331 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744347 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:32 crc kubenswrapper[4979]: E0130 21:40:32.744426 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:40.744406412 +0000 UTC m=+36.705653445 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.791608 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.791667 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.791682 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.791709 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.791726 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.895167 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.895224 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.895239 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.895259 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.895273 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.998899 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.998954 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.998973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.998996 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:32 crc kubenswrapper[4979]: I0130 21:40:32.999011 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:32Z","lastTransitionTime":"2026-01-30T21:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.030329 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:51:57.898573758 +0000 UTC
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.069508 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.069607 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:40:33 crc kubenswrapper[4979]: E0130 21:40:33.069766 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.069873 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:40:33 crc kubenswrapper[4979]: E0130 21:40:33.070018 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:40:33 crc kubenswrapper[4979]: E0130 21:40:33.070188 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.101771 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.101858 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.101877 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.101908 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.101930 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.204748 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.204793 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.204803 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.204821 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.204833 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.306856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.306901 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.306926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.306950 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.306965 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.311522 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e" containerID="fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f" exitCode=0
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.311574 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerDied","Data":"fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f"}
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.327748 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.343933 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.360000 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.375023 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.391652 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.408228 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.409131 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.409181 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.409191 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.409213 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.409225 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.422989 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.438093 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.453149 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z"
Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.470433 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.487465 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.499931 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.512276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.512328 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.512340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.512362 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.512374 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.515794 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.540108 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.615467 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.615893 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.615904 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.615926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.615937 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.718608 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.718653 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.718662 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.718679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.718689 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.820949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.820999 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.821010 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.821057 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.821073 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.929420 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.929472 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.929481 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.929505 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:33 crc kubenswrapper[4979]: I0130 21:40:33.929517 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:33Z","lastTransitionTime":"2026-01-30T21:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.030498 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:53:12.520429592 +0000 UTC Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.032510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.032551 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.032560 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.032579 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.032591 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.135602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.135646 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.135658 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.135677 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.135688 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.239517 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.239563 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.239575 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.239597 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.239611 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.320194 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" event={"ID":"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e","Type":"ContainerStarted","Data":"11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.325831 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.326690 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.326762 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.337638 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.343252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.343460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc 
kubenswrapper[4979]: I0130 21:40:34.343527 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.343617 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.343718 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.354664 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.359438 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.362168 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.367786 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.383259 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.399176 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.416208 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.434971 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.446485 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.446529 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.446541 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.446563 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.446575 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.456437 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4
bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.471830 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.486334 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.499534 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.510320 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.521208 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.532659 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.544799 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.548814 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.548856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.548868 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.548884 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.548896 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.564011 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.578269 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.590934 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.602259 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.616159 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.627611 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.637408 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.647207 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.650766 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.650812 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.650826 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.650845 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.650857 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.665463 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.676701 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.696356 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.712893 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.729955 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:34Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.753376 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.753424 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.753439 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.753457 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.753471 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.777917 4979 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.856489 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.856555 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.856563 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.856579 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.856588 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.960996 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.961404 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.961417 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.961437 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:34 crc kubenswrapper[4979]: I0130 21:40:34.961452 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:34Z","lastTransitionTime":"2026-01-30T21:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.031112 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:26:58.92575798 +0000 UTC Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.065481 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.065573 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.065590 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.065611 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.065625 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.069245 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:35 crc kubenswrapper[4979]: E0130 21:40:35.069438 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.069581 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.069584 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:35 crc kubenswrapper[4979]: E0130 21:40:35.069681 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:35 crc kubenswrapper[4979]: E0130 21:40:35.069775 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.091643 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.110420 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.126479 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.145643 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-
kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.168121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.168159 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.168170 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.168190 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.168202 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.175301 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.199359 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.218262 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.234874 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.251843 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.265751 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.270942 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.270979 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.270990 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.271006 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.271017 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.278917 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.301835 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.323358 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.329141 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.339872 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.374202 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.374264 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.374283 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.374306 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.374325 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.477560 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.477615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.477627 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.477649 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.477663 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.580958 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.581005 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.581015 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.581054 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.581065 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.684410 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.684485 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.684509 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.684543 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.684567 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.788195 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.788258 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.788276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.788305 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.788318 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.891436 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.891501 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.891517 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.891541 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.891554 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.994421 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.994469 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.994481 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.994500 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:35 crc kubenswrapper[4979]: I0130 21:40:35.994513 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:35Z","lastTransitionTime":"2026-01-30T21:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.032243 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:51:16.520386678 +0000 UTC Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.097102 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.097153 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.097166 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.097196 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.097209 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.200141 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.200215 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.200235 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.200263 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.200283 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.303606 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.303676 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.303693 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.303719 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.303734 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.332087 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.407467 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.407549 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.407573 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.407607 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.407631 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.511002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.511687 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.511868 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.512095 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.512286 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.615503 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.615574 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.615598 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.615627 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.615650 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.718848 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.718916 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.718934 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.718963 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.718980 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.822252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.822325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.822345 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.822375 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.822394 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.925877 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.925956 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.925973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.926003 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:36 crc kubenswrapper[4979]: I0130 21:40:36.926024 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:36Z","lastTransitionTime":"2026-01-30T21:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.029105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.029154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.029167 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.029185 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.029198 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.033136 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:16:39.941201941 +0000 UTC Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.069174 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.069199 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:37 crc kubenswrapper[4979]: E0130 21:40:37.069347 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:37 crc kubenswrapper[4979]: E0130 21:40:37.069384 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.069288 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:37 crc kubenswrapper[4979]: E0130 21:40:37.069464 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.131549 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.131610 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.131619 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.131633 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.131642 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.234440 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.234493 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.234508 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.234530 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.234557 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.336553 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.336602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.336613 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.336633 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.336646 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.440479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.440552 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.440571 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.440595 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.440612 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.543530 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.543602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.543624 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.543673 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.543697 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.646265 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.646304 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.646318 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.646335 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.646347 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.749058 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.749104 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.749115 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.749141 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.749160 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.855745 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.855809 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.855818 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.855832 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.855842 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.958993 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.959091 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.959107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.959130 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:37 crc kubenswrapper[4979]: I0130 21:40:37.959147 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:37Z","lastTransitionTime":"2026-01-30T21:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.033725 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:45:36.440946489 +0000 UTC Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.061487 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.061519 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.061546 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.061564 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.061573 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.164255 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.164319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.164340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.164368 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.164390 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.267169 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.267218 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.267231 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.267248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.267263 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.341071 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/0.log" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.343834 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940" exitCode=1 Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.343874 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.344868 4979 scope.go:117] "RemoveContainer" containerID="feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.360098 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.388154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.388209 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.388227 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.388252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.388267 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.403672 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.427592 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.449552 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.464371 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.476329 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.489443 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.491241 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.491276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.491288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.491304 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.491316 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.503323 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.518144 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.532003 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.555214 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:37.934618 6305 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.572366 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.590485 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.594935 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.594975 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.594987 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.595005 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.595017 4979 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.608767 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:38Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.700023 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.700116 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.700131 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.700157 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.700182 4979 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.803153 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.803225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.803242 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.803271 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.803290 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.906683 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.906764 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.906789 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.906830 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:38 crc kubenswrapper[4979]: I0130 21:40:38.906857 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:38Z","lastTransitionTime":"2026-01-30T21:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.010881 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.010957 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.010977 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.011002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.011022 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.034364 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 17:26:54.226699242 +0000 UTC Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.069382 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.069463 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:39 crc kubenswrapper[4979]: E0130 21:40:39.069577 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:39 crc kubenswrapper[4979]: E0130 21:40:39.069795 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.069884 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:39 crc kubenswrapper[4979]: E0130 21:40:39.070539 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.113907 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.114293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.114483 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.114617 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.114746 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.184305 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9"] Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.185273 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.187655 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.188320 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.214105 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.218531 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.218585 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.218598 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.218620 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.218634 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.253059 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.276016 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.302927 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:37.934618 6305 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.319593 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321383 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321435 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321454 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321481 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321504 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321799 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vl5d\" (UniqueName: \"kubernetes.io/projected/cb7a0992-0b0f-4219-ac47-fb6021840903-kube-api-access-2vl5d\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321866 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb7a0992-0b0f-4219-ac47-fb6021840903-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321905 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb7a0992-0b0f-4219-ac47-fb6021840903-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.321985 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb7a0992-0b0f-4219-ac47-fb6021840903-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.339354 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.353664 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.371068 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.389942 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.404954 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.420974 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.422763 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vl5d\" (UniqueName: \"kubernetes.io/projected/cb7a0992-0b0f-4219-ac47-fb6021840903-kube-api-access-2vl5d\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.422830 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb7a0992-0b0f-4219-ac47-fb6021840903-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.422868 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb7a0992-0b0f-4219-ac47-fb6021840903-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.422906 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb7a0992-0b0f-4219-ac47-fb6021840903-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: 
I0130 21:40:39.424151 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb7a0992-0b0f-4219-ac47-fb6021840903-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.424647 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.424690 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.424706 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.424731 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.424749 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.425892 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb7a0992-0b0f-4219-ac47-fb6021840903-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.433593 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb7a0992-0b0f-4219-ac47-fb6021840903-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.436550 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.445239 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2vl5d\" (UniqueName: \"kubernetes.io/projected/cb7a0992-0b0f-4219-ac47-fb6021840903-kube-api-access-2vl5d\") pod \"ovnkube-control-plane-749d76644c-xz6s9\" (UID: \"cb7a0992-0b0f-4219-ac47-fb6021840903\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.460811 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d
708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.496532 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.501378 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.508255 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:39Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:39 crc kubenswrapper[4979]: W0130 21:40:39.516854 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb7a0992_0b0f_4219_ac47_fb6021840903.slice/crio-ee0f3ca1e708bede0ffcd0b98820f0e35197dcaaa24c2f693aa9096403bcba24 WatchSource:0}: Error finding container ee0f3ca1e708bede0ffcd0b98820f0e35197dcaaa24c2f693aa9096403bcba24: Status 404 returned error can't find the container with id ee0f3ca1e708bede0ffcd0b98820f0e35197dcaaa24c2f693aa9096403bcba24 Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.527121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.527166 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.527177 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.527196 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.527211 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.630700 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.630742 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.630750 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.630767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.630778 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.733789 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.733850 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.733860 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.733874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.733905 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.836793 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.836853 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.836863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.836885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.836899 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.940514 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.940578 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.940599 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.940624 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:39 crc kubenswrapper[4979]: I0130 21:40:39.940651 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:39Z","lastTransitionTime":"2026-01-30T21:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.035321 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:50:46.553411228 +0000 UTC Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.044840 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.044896 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.044910 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.044941 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.044961 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.148876 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.148937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.148949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.148970 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.148983 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.252449 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.252517 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.252530 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.252555 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.252570 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.355962 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.356090 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.356121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.356153 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.356178 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.357713 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/0.log" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.362013 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.364219 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" event={"ID":"cb7a0992-0b0f-4219-ac47-fb6021840903","Type":"ContainerStarted","Data":"ee0f3ca1e708bede0ffcd0b98820f0e35197dcaaa24c2f693aa9096403bcba24"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.459541 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.459623 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.459648 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.459682 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.459708 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.565509 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.565587 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.565610 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.565640 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.565666 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.668638 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.668722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.668747 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.668782 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.668806 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.716145 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-pk47q"] Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.716733 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.716816 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.736413 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.738934 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.739004 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.739182 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:40 crc 
kubenswrapper[4979]: E0130 21:40:40.739310 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:56.739272911 +0000 UTC m=+52.700520004 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.739441 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.739597 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:56.739563139 +0000 UTC m=+52.700810202 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.760384 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.772288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.772365 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.772393 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.772427 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.772451 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.779157 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.798352 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.817575 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.839257 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.840070 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840225 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:40:56.840183103 +0000 UTC m=+52.801430136 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.840304 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbmzk\" (UniqueName: \"kubernetes.io/projected/d0632938-c88a-4c22-b0e7-8f7473532f07-kube-api-access-jbmzk\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.840465 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.840511 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.840549 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840702 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840723 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840739 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840733 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840766 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840780 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840793 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:56.840781289 +0000 UTC m=+52.802028442 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.840838 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:56.84081687 +0000 UTC m=+52.802063923 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.857285 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.872565 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.875402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.875470 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.875494 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc 
kubenswrapper[4979]: I0130 21:40:40.875528 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.875553 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.889113 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.908119 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.921610 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.941920 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.942061 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbmzk\" (UniqueName: \"kubernetes.io/projected/d0632938-c88a-4c22-b0e7-8f7473532f07-kube-api-access-jbmzk\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.942260 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: E0130 21:40:40.942415 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:41.442374619 +0000 UTC m=+37.403621692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.951544 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:37.934618 6305 handler.go:208] 
Removed *v1.Pod event handler 3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.960837 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbmzk\" (UniqueName: \"kubernetes.io/projected/d0632938-c88a-4c22-b0e7-8f7473532f07-kube-api-access-jbmzk\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.973675 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.979243 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.979290 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.979308 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.979335 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.979360 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:40Z","lastTransitionTime":"2026-01-30T21:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:40 crc kubenswrapper[4979]: I0130 21:40:40.987966 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:40Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.003354 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.020872 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.036080 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:40:27.320127802 +0000 UTC Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.069116 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.069187 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.069255 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.069412 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.069556 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.069692 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.082926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.082993 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.083014 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.083081 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.083103 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.186752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.186832 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.186857 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.186887 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.186905 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.290701 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.290783 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.290804 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.290834 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.290854 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.394275 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.394356 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.394371 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.394393 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.394407 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.448818 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.449010 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.449115 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:42.449093745 +0000 UTC m=+38.410340788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.497269 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.497309 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.497320 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.497339 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.497352 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.600863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.600930 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.600950 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.600982 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.601008 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.705475 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.705556 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.705582 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.705616 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.705643 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.744204 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.744253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.744269 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.744292 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.744309 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.767459 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.773240 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.773306 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.773324 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.773352 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.773375 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.795295 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.801876 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.801935 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.801949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.801971 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.801989 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.824890 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.831624 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.831764 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.831804 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.831839 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.831863 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.855117 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.861748 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.861792 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.861810 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.861837 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.861855 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.884361 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:41 crc kubenswrapper[4979]: E0130 21:40:41.884663 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.887209 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.887253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.887269 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.887292 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.887308 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.990634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.990712 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.990731 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.990762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:41 crc kubenswrapper[4979]: I0130 21:40:41.990786 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:41Z","lastTransitionTime":"2026-01-30T21:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.036901 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:11:01.261467903 +0000 UTC Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.068942 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:42 crc kubenswrapper[4979]: E0130 21:40:42.069118 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.094180 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.094279 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.094300 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.094325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.094348 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.197397 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.197445 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.197457 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.197475 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.197488 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.305437 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.305516 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.305536 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.305564 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.305582 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.375199 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.394586 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a
4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.409168 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.409218 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.409230 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.409250 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.409262 4979 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.412241 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.430238 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.454830 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.460922 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:42 crc kubenswrapper[4979]: E0130 21:40:42.461190 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:42 crc kubenswrapper[4979]: E0130 21:40:42.461313 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:44.461277063 +0000 UTC m=+40.422524136 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.471128 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.487909 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.508236 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.513348 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.513430 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.513456 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.513492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.513513 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.524193 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.549937 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.574650 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.601305 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 
21:40:37.934618 6305 handler.go:208] Removed *v1.Pod event handler 3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.616172 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.616260 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.616288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.616360 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.616375 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.619644 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.639323 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.660399 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.682489 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.706849 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.718766 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.718822 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.718835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.718855 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.718869 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.822241 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.822305 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.822320 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.822347 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.822366 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.925346 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.925405 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.925416 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.925435 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:42 crc kubenswrapper[4979]: I0130 21:40:42.925447 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:42Z","lastTransitionTime":"2026-01-30T21:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.028989 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.029043 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.029056 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.029072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.029083 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.037378 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:22:18.801961418 +0000 UTC Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.069207 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.069207 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:43 crc kubenswrapper[4979]: E0130 21:40:43.069369 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.069207 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:43 crc kubenswrapper[4979]: E0130 21:40:43.069533 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:43 crc kubenswrapper[4979]: E0130 21:40:43.069448 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.131602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.131655 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.131668 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.131689 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.131706 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.234542 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.234615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.234627 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.234652 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.234695 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.338420 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.338471 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.338482 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.338502 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.338518 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.385522 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/1.log" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.386634 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/0.log" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.390068 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4" exitCode=1 Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.390145 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.390239 4979 scope.go:117] "RemoveContainer" containerID="feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.391963 4979 scope.go:117] "RemoveContainer" containerID="202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4" Jan 30 21:40:43 crc kubenswrapper[4979]: E0130 21:40:43.392518 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.392695 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" event={"ID":"cb7a0992-0b0f-4219-ac47-fb6021840903","Type":"ContainerStarted","Data":"f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.392721 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" event={"ID":"cb7a0992-0b0f-4219-ac47-fb6021840903","Type":"ContainerStarted","Data":"ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.406714 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.422181 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.433980 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.442134 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.442193 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.442205 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.442228 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.442242 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.454233 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.473572 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.489695 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.509142 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.523382 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.538661 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.545307 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.545358 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.545368 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.545384 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.545395 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.554211 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.570574 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.595951 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:37.934618 6305 handler.go:208] Removed *v1.Pod event handler 3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b
69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.613100 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.625119 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.641558 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.647980 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.648021 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.648061 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.648081 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.648092 4979 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.655683 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.670134 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.684718 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.701699 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.716674 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.731754 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.745557 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.750618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.750662 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.750677 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.750698 4979 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.750713 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.762770 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",
\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.774904 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.790137 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.804775 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.823490 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693
a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:37.934618 6305 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.835669 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.848760 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.853808 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.853854 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.853864 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.853882 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.853894 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.868235 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.883382 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.898203 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.956928 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.956981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.956993 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.957012 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:43 crc kubenswrapper[4979]: I0130 21:40:43.957024 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:43Z","lastTransitionTime":"2026-01-30T21:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.037755 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:22:38.311155926 +0000 UTC Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.059940 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.060004 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.060024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.060083 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.060106 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.069350 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:44 crc kubenswrapper[4979]: E0130 21:40:44.069588 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.168492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.168602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.168615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.168632 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.168642 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.272387 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.272451 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.272469 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.272494 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.272513 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.375998 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.376112 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.376127 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.376151 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.376165 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.397338 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/1.log" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.479841 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.479929 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.479945 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.479973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.479991 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.484709 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:44 crc kubenswrapper[4979]: E0130 21:40:44.485070 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:44 crc kubenswrapper[4979]: E0130 21:40:44.485184 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:48.485150617 +0000 UTC m=+44.446397830 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.583419 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.583487 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.583499 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.583522 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.583534 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.686517 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.686580 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.686597 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.686618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.686630 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.789754 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.789801 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.789811 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.789829 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.789839 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.892273 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.892330 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.892351 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.892370 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:44 crc kubenswrapper[4979]: I0130 21:40:44.892384 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:44Z","lastTransitionTime":"2026-01-30T21:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.021737 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.021800 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.021816 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.021838 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.021856 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.038938 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:30:23.914545118 +0000 UTC Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.069116 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.069168 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.069301 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:45 crc kubenswrapper[4979]: E0130 21:40:45.069314 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:45 crc kubenswrapper[4979]: E0130 21:40:45.069447 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:45 crc kubenswrapper[4979]: E0130 21:40:45.069626 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.087218 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.101599 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.117263 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.125415 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.125452 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.125463 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.125479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.125495 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.133020 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.149430 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.163531 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.180668 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.199416 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.215225 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.228725 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.229098 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.229272 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.229380 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.229470 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.230422 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.246175 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.296589 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7ca0c57ec7bff4c652c3f42edd1ad61b0d190a954df8e0a9a71be67895940\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:38Z\\\",\\\"message\\\":\\\" 5 for removal\\\\nI0130 21:40:37.934362 6305 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:40:37.934385 6305 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:40:37.934401 6305 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:40:37.934400 6305 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:40:37.934408 6305 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:40:37.934435 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 21:40:37.934442 6305 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 21:40:37.934454 6305 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 21:40:37.934467 6305 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:40:37.934484 6305 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 21:40:37.934507 6305 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 21:40:37.934526 6305 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 21:40:37.934551 6305 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:40:37.934564 6305 factory.go:656] Stopping watch factory\\\\nI0130 21:40:37.934580 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:37.934618 6305 handler.go:208] Removed *v1.Pod event handler 3\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b
69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.313483 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 
21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.327927 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.332286 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.332328 4979 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.332338 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.332352 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.332362 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.346835 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.361640 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.434980 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.435013 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.435021 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.435058 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.435069 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.539252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.539309 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.539325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.539348 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.539365 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.642927 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.642983 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.642999 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.643021 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.643055 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.746647 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.746714 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.746728 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.746751 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.746766 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.850949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.851025 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.851080 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.851158 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.851177 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.953677 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.953721 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.953733 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.953750 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:45 crc kubenswrapper[4979]: I0130 21:40:45.953762 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:45Z","lastTransitionTime":"2026-01-30T21:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.039481 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:38:16.947750316 +0000 UTC Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.057005 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.057084 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.057101 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.057124 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.057140 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.068931 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:46 crc kubenswrapper[4979]: E0130 21:40:46.069177 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.159853 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.159918 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.159945 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.159977 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.160001 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.263733 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.263769 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.263777 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.263792 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.263802 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.367784 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.368182 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.368355 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.368575 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.368741 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.472459 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.472529 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.472551 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.472577 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.472597 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.575645 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.575680 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.575691 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.575708 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.575720 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.679025 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.679107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.679121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.679140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.679156 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.781402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.781460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.781476 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.781498 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.781510 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
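Has your network provider started?"}

Every NodeNotReady heartbeat in this stretch repeats one root cause: the runtime finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so the Ready condition cannot flip until the network plugin (multus/OVN on this cluster) writes its config. A minimal sketch that checks the directory the runtime is polling; the path is taken from the log, while the extension list is an assumption based on common CNI conventions:

from pathlib import Path

# Directory named in the NetworkPluginNotReady messages above.
CNI_DIR = Path("/etc/kubernetes/cni/net.d")

# Conventional CNI config extensions (an assumption, not from this log).
EXTS = {".conf", ".conflist", ".json"}

confs = (sorted(p for p in CNI_DIR.iterdir() if p.suffix in EXTS)
         if CNI_DIR.is_dir() else [])

if confs:
    for p in confs:
        print("found CNI config:", p)
else:
    print(f"no CNI configuration file in {CNI_DIR}; "
          "the network plugin has not written its config yet")
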
Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.884860 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.884923 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.884943 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.884971 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.884988 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.988453 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.988513 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.988526 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.988547 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:46 crc kubenswrapper[4979]: I0130 21:40:46.988564 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:46Z","lastTransitionTime":"2026-01-30T21:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.040387 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:06:53.587982159 +0000 UTC Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.069149 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.069148 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:47 crc kubenswrapper[4979]: E0130 21:40:47.069333 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.069166 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:47 crc kubenswrapper[4979]: E0130 21:40:47.069593 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:47 crc kubenswrapper[4979]: E0130 21:40:47.069663 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.091951 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.092081 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.092101 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.092127 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.092179 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.196311 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.196734 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.196876 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.197097 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.197285 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.301237 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.301311 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.301332 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.301357 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.301374 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.404013 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.404097 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.404108 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.404128 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.404143 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.507452 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.507522 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.507547 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.507575 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.507594 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.610377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.610753 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.610819 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.610882 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.610940 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.714492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.714542 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.714554 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.714574 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.714587 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.817684 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.817762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.817774 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.817792 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.817803 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.921249 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.921303 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.921316 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.921337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:47 crc kubenswrapper[4979]: I0130 21:40:47.921352 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:47Z","lastTransitionTime":"2026-01-30T21:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.024009 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.024340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.024403 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.024469 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.024586 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.041438 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:15:10.693067657 +0000 UTC Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.069171 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:48 crc kubenswrapper[4979]: E0130 21:40:48.069329 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.128337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.128401 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.128416 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.128442 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.128458 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.231491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.231528 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.231538 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.231554 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.231566 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.334497 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.334538 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.334547 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.334567 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.334577 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.440047 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.440105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.440121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.440142 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.440183 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.535331 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:48 crc kubenswrapper[4979]: E0130 21:40:48.535595 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:48 crc kubenswrapper[4979]: E0130 21:40:48.535734 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:40:56.53570251 +0000 UTC m=+52.496949723 (durationBeforeRetry 8s). 
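Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered

The durationBeforeRetry 8s above is one step of the volume manager's doubling backoff between failed MountVolume attempts. A hedged sketch of that pattern; the 0.5 s starting delay and the roughly two-minute cap are assumed defaults, only the 8 s step and the retry-at timestamp appear in this log:

import itertools

def backoff_delays(initial=0.5, factor=2.0, cap=122.0):
    """Yield capped, doubling retry delays in seconds (assumed defaults)."""
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

# The logged 8s figure shows up as the fifth step of this sequence.
print([round(d, 1) for d in itertools.islice(backoff_delays(), 10)])
# -> [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 122.0, 122.0]
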
Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.543780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.543831 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.543841 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.543864 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.543875 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.646885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.646923 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.646940 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.646961 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.646971 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.750293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.750340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.750352 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.750372 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.750385 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.853198 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.853251 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.853264 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.853284 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.853300 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.955387 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.955445 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.955457 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.955473 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:48 crc kubenswrapper[4979]: I0130 21:40:48.955483 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:48Z","lastTransitionTime":"2026-01-30T21:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.041629 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:41:02.433061209 +0000 UTC Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.057956 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.058025 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.058055 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.058077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.058088 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.069383 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.069416 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.069383 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:49 crc kubenswrapper[4979]: E0130 21:40:49.069562 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:49 crc kubenswrapper[4979]: E0130 21:40:49.069724 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:49 crc kubenswrapper[4979]: E0130 21:40:49.069849 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.160491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.160856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.160980 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.161140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.161233 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.264227 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.264281 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.264296 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.264317 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.264332 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.367051 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.367373 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.367492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.367587 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.367669 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.471716 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.471764 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.471777 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.471795 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.471806 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.575210 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.575290 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.575307 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.575329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.575342 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.678618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.679017 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.679234 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.679405 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.679543 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.783536 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.783597 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.783607 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.783625 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.783637 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.886580 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.886649 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.886672 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.886699 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.886720 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
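Has your network provider started?"}

The kubelet-serving rotation deadlines interleaved above are recomputed every second and jump around (2025-12-19, 2025-12-10, 2025-11-08, 2026-01-03, 2025-11-30) even though the certificate's expiry is fixed at 2026-02-24 05:53:03. That is consistent with the certificate manager drawing a fresh, jittered deadline inside the validity window on each pass; the 70 to 90 percent slice below is an assumption about client-go's behaviour, and the notBefore value is hypothetical, only the expiry comes from the log:

import random
from datetime import datetime, timedelta

# Expiry taken from the certificate_manager.go lines above.
not_after = datetime(2026, 2, 24, 5, 53, 3)
# Hypothetical one-year-earlier issue time, for illustration only.
not_before = datetime(2025, 2, 24, 5, 53, 3)

validity = (not_after - not_before).total_seconds()

# Assumed behaviour: a uniformly random deadline in the 70%-90% slice of
# the validity window, re-drawn on every evaluation, hence the jumping
# deadlines in the log.
for _ in range(3):
    frac = 0.7 + 0.2 * random.random()
    print("rotation deadline:", not_before + timedelta(seconds=frac * validity))
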
Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.990551 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.990619 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.990637 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.990665 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:49 crc kubenswrapper[4979]: I0130 21:40:49.990685 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:49Z","lastTransitionTime":"2026-01-30T21:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.042755 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:21:58.92873465 +0000 UTC Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.069118 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:50 crc kubenswrapper[4979]: E0130 21:40:50.069315 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.093364 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.093412 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.093425 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.093444 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.093458 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.196135 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.196227 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.196245 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.196275 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.196295 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.299427 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.299479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.299491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.299508 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.299520 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.402961 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.403024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.403077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.403108 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.403127 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.506413 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.506465 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.506480 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.506504 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.506520 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.610302 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.610361 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.610381 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.610410 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.610430 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.713981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.714063 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.714076 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.714101 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.714114 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.817015 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.817103 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.817116 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.817134 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.817149 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.919536 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.919600 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.919619 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.919641 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:50 crc kubenswrapper[4979]: I0130 21:40:50.919655 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:50Z","lastTransitionTime":"2026-01-30T21:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.022689 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.022765 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.022775 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.022791 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.022801 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.043322 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:58:18.792489055 +0000 UTC Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.069961 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:51 crc kubenswrapper[4979]: E0130 21:40:51.070192 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.070276 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.070356 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:51 crc kubenswrapper[4979]: E0130 21:40:51.070484 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:51 crc kubenswrapper[4979]: E0130 21:40:51.070656 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.125756 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.125817 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.125832 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.125866 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.125888 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.229535 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.229587 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.229597 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.229614 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.229626 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.332491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.332529 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.332537 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.332578 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.332590 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.435614 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.435754 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.435828 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.435914 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.435940 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.540152 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.540207 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.540225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.540251 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.540269 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.643702 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.643778 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.643796 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.643824 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.643843 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.747931 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.748082 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.748108 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.748134 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.748148 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.898943 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.899010 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.899027 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.899074 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:51 crc kubenswrapper[4979]: I0130 21:40:51.899089 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:51Z","lastTransitionTime":"2026-01-30T21:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.002615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.002692 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.002705 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.002731 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.002746 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.025337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.025438 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.025453 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.025480 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.025496 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.043522 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:41:44.753192857 +0000 UTC Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.050162 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.057086 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.057169 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.057191 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.057224 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.057251 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.068665 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.068824 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.077052 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.082537 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.082582 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.082591 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.082611 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.082623 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.095921 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.099949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.099997 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.100007 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.100024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.100052 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.114510 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.118911 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.118958 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
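Every status patch above dies the same way: the node.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30 (a CRC guest resumed long after its certificates were minted), so the kubelet's TLS client rejects the handshake before the POST is ever delivered. A minimal Go sketch of that validity-window check, runnable on the node itself (the file name is made up, and InsecureSkipVerify is used only so the expired leaf can be inspected):

    // checkcert.go - hedged sketch: fetch the webhook's leaf certificate and
    // reproduce the x509 validity-window check behind "certificate has
    // expired or is not yet valid".
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Endpoint taken from the failing POST in the log.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            // Skip chain verification so the handshake succeeds even with an
            // expired certificate; we only want to read the leaf's dates.
            InsecureSkipVerify: true,
        })
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        certs := conn.ConnectionState().PeerCertificates
        if len(certs) == 0 {
            fmt.Println("server sent no certificate")
            return
        }
        leaf := certs[0]
        now := time.Now().UTC()
        switch {
        case now.Before(leaf.NotBefore):
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), leaf.NotBefore.UTC().Format(time.RFC3339))
        case now.After(leaf.NotAfter):
            // Matches the log: "current time 2026-01-30T21:40:52Z is after 2025-08-24T17:21:41Z".
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
        default:
            fmt.Println("valid until", leaf.NotAfter.UTC().Format(time.RFC3339))
        }
    }

Nothing in the patch payload matters here; the request fails during the handshake, so the same error will recur until the webhook's certificate is rotated or the guest clock is corrected.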
event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.118970 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.118990 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.119003 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.131811 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:52 crc kubenswrapper[4979]: E0130 21:40:52.131951 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.135006 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
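The "will retry" / "exceeds retry count" pair reflects the kubelet's bounded status sync: each sync makes a small fixed number of patch attempts (the nodeStatusUpdateRetry constant in the kubelet source; 5 in the releases I have checked, so treat the exact value as an assumption) and then gives up until the next sync period. Distilled into a self-contained sketch:

    // retry.go - illustrative only: a bounded retry loop producing the same
    // "will retry" / "exceeds retry count" pair seen in the log.
    package main

    import (
        "errors"
        "fmt"
    )

    const nodeStatusUpdateRetry = 5 // assumed to match the kubelet constant

    func updateNodeStatus(patch func() error) error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := patch(); err != nil {
                fmt.Println("Error updating node status, will retry:", err)
                continue
            }
            return nil
        }
        return errors.New("update node status exceeds retry count")
    }

    func main() {
        // Every attempt fails identically, as in the log: the webhook's expired
        // certificate makes the PATCH unrecoverable within one sync period.
        err := updateNodeStatus(func() error {
            return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": x509: certificate has expired`)
        })
        fmt.Println(err)
    }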
event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.135093 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.135115 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.135136 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.135150 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.238854 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.238911 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.238926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.238950 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.238962 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.342090 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.342143 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.342154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.342174 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.342185 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.444882 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.444950 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.444962 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.444986 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.445001 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.546754 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.546788 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.546797 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.546813 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.546824 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.649220 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.649259 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.649272 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.649292 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.649302 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.751622 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.751690 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.751703 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.751721 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.751734 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.854918 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.854989 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.855002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.855023 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.855067 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.958087 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.958131 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.958143 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.958163 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:52 crc kubenswrapper[4979]: I0130 21:40:52.958177 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:52Z","lastTransitionTime":"2026-01-30T21:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.044213 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:07:15.270365459 +0000 UTC Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.061280 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.061339 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.061353 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.061375 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.061386 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.069882 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.069939 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.069967 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:53 crc kubenswrapper[4979]: E0130 21:40:53.070103 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:53 crc kubenswrapper[4979]: E0130 21:40:53.070234 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
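These sandbox failures are the flip side of the NotReady condition: the runtime reports NetworkReady=false because nothing has yet written a network config into /etc/kubernetes/cni/net.d/ (on OpenShift that is OVN-Kubernetes' job, which it evidently has not done here). The readiness test reduces to whether the conf dir holds at least one loadable config; a rough stand-alone approximation follows (the extension list mirrors what libcni loads, but treat it as an assumption):

    // cnicheck.go - hedged approximation of the "no CNI configuration file"
    // readiness test: does the CNI conf dir hold any config the runtime
    // could load?
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
        var found []string
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(confDir, pattern))
            if err != nil {
                fmt.Println("glob:", err)
                os.Exit(1)
            }
            found = append(found, matches...)
        }
        if len(found) == 0 {
            // The state the kubelet keeps logging: no network provider config yet.
            fmt.Println("no CNI configuration file in", confDir, "- network not ready")
            os.Exit(1)
        }
        fmt.Println("CNI configs present:", found)
    }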
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.163712 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.164321 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.164393 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.164463 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.164534 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.267724 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.267820 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.267843 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.267875 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.267901 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.371824 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.372186 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.372208 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.372228 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.372242 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.487759 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.488825 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.488991 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.489162 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.489385 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.592419 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.592473 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.592485 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.592504 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.592516 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.696357 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.696431 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.696452 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.696479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.696498 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.799151 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.799260 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.799291 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.799327 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.799355 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.902840 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.902884 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.902893 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.902907 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:53 crc kubenswrapper[4979]: I0130 21:40:53.902917 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:53Z","lastTransitionTime":"2026-01-30T21:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.005085 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.005147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.005160 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.005181 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.005192 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.044761 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:24:49.023718232 +0000 UTC Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.069594 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:54 crc kubenswrapper[4979]: E0130 21:40:54.069770 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.071107 4979 scope.go:117] "RemoveContainer" containerID="202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.086231 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.103920 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.109661 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.109716 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.109730 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.109752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.109765 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.118456 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.134265 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.148144 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.167056 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.183863 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"}
,{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.197750 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.214125 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.214184 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.214200 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.214225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.214241 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.214719 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.230461 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.253220 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.280431 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c
3251e2abf8d8535781ea8ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.297634 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.308345 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.320193 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.320248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.320257 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.320273 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.320304 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.323633 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.336836 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.422597 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.422629 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.422636 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.422665 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.422675 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.438527 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/1.log" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.441255 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.441401 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.456410 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e093
1ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.475631 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.490555 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.507242 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.520099 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.524909 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.524962 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.524973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.524991 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.525004 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.551093 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.572817 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.595993 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.612486 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.627574 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.627611 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.627621 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.627636 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.627647 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.633210 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.649085 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.661687 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.678813 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.693262 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.706739 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.726664 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c
97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.730372 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.730422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.730432 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.730450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.730465 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.833579 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.833636 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.833649 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.833673 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.833688 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.937523 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.937614 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.937634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.937667 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:54 crc kubenswrapper[4979]: I0130 21:40:54.937688 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:54Z","lastTransitionTime":"2026-01-30T21:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.041190 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.041234 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.041245 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.041263 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.041278 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.045905 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:00:13.894151515 +0000 UTC Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.069166 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.069295 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:55 crc kubenswrapper[4979]: E0130 21:40:55.069342 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.069318 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:55 crc kubenswrapper[4979]: E0130 21:40:55.069496 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:55 crc kubenswrapper[4979]: E0130 21:40:55.069771 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.087462 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.100077 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.114248 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.126760 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.137553 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.143901 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.143937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.143948 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc 
kubenswrapper[4979]: I0130 21:40:55.143964 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.143976 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.148768 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.161026 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.175828 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.192662 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.212756 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c
97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.227562 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 
21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.248009 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.248138 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.248169 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.248204 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.248227 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.252086 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.271933 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.290917 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.306318 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.321299 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.350883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.351319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.351396 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.351502 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.351594 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.446859 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/2.log" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.447812 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/1.log" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.450990 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90" exitCode=1 Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.451055 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.451099 4979 scope.go:117] "RemoveContainer" containerID="202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.451959 4979 scope.go:117] "RemoveContainer" containerID="15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90" Jan 30 21:40:55 crc kubenswrapper[4979]: E0130 21:40:55.452166 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.455262 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.455294 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.455305 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.455324 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.455337 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.469555 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.488394 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.516148 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c
97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202b69f4113465063a62324e7861feabdf2e770c3251e2abf8d8535781ea8ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"message\\\":\\\"Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d4efc4a8-c514-4a6b-901c-2953978b50d3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6011affd-30a6-4be6-872d-e4cf1ca780cf}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 21:40:43.158724 6444 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 21:40:43.158810 6444 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 21:40:43.158845 6444 ovnkube.go:599] Stopped ovnkube\\\\nI0130 21:40:43.158871 6444 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 21:40:43.158890 6444 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0130 21:40:43.158907 6444 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.26282ms\\\\nI0130 21:40:43.158921 6444 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0130 21:40:43.158965 6444 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node 
network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.528009 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 
21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.539587 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.558327 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.558410 4979 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.558422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.558442 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.558458 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.559628 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.576494 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.593741 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.605764 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.619951 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.634870 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.651208 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.662259 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.662319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.662335 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.662360 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.662378 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.667083 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.679234 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.698889 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.710430 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.765944 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.766000 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.766016 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.766084 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.766104 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.870088 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.870188 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.870215 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.870248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.870270 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.973189 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.973245 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.973258 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.973278 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.973291 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:55Z","lastTransitionTime":"2026-01-30T21:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:55 crc kubenswrapper[4979]: I0130 21:40:55.973987 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.046332 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:58:51.895008686 +0000 UTC Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.068645 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.068831 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.076792 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.076838 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.076847 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.076866 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.076877 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.180111 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.180180 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.180196 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.180221 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.180240 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.282920 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.282983 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.282994 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.283013 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.283028 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.385794 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.385855 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.385865 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.385885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.385900 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.463090 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/2.log" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.468517 4979 scope.go:117] "RemoveContainer" containerID="15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90" Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.468761 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.485541 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.490118 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.490170 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.490185 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.490211 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.490228 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.503980 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.519420 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.542387 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.557559 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.571254 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.587578 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.592969 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.593012 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.593405 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.593485 4979 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.593570 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.593646 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.593152 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.594286 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:12.594264007 +0000 UTC m=+68.555511040 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.604652 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.619094 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.633724 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.645645 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.661484 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.677359 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.695375 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.695406 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.695415 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.695430 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.695440 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.697767 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.716359 4979 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.734268 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.795434 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.795504 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.795667 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.795736 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.795842 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:28.795814433 +0000 UTC m=+84.757061466 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.795884 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:28.795876314 +0000 UTC m=+84.757123347 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.797745 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.797821 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.797844 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.797869 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.797887 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.896963 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897215 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:41:28.897169926 +0000 UTC m=+84.858416959 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.897297 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.897340 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897501 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897528 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897541 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897554 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897580 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897599 4979 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897603 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:28.897588458 +0000 UTC m=+84.858835491 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:56 crc kubenswrapper[4979]: E0130 21:40:56.897652 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:28.897639919 +0000 UTC m=+84.858887132 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.900788 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.900818 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.900828 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.900843 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:56 crc kubenswrapper[4979]: I0130 21:40:56.900853 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:56Z","lastTransitionTime":"2026-01-30T21:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.003581 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.003662 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.003676 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.003697 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.003712 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.047143 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:20:00.586491798 +0000 UTC Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.069123 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.069204 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:57 crc kubenswrapper[4979]: E0130 21:40:57.069331 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.069370 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:57 crc kubenswrapper[4979]: E0130 21:40:57.069500 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:57 crc kubenswrapper[4979]: E0130 21:40:57.069628 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.106991 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.107093 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.107112 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.107140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.107159 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.209773 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.209835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.209852 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.209880 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.209900 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.315337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.315399 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.315411 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.315433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.315446 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.419222 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.419460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.419521 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.419634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.419716 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.523116 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.523171 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.523186 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.523211 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.523230 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.625631 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.625685 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.625697 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.625714 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.625726 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.729167 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.729236 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.729254 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.729280 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.729300 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.831977 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.832299 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.832403 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.832495 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.832574 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.936121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.936288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.936317 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.936402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:57 crc kubenswrapper[4979]: I0130 21:40:57.936432 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:57Z","lastTransitionTime":"2026-01-30T21:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.039355 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.039392 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.039400 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.039415 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.039425 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.047739 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:08:13.667236872 +0000 UTC Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.069266 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:40:58 crc kubenswrapper[4979]: E0130 21:40:58.069752 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.142544 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.142836 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.142937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.143012 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.143107 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.245788 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.245842 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.245854 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.245874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.245888 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.348408 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.348673 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.348780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.348873 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.348947 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.452898 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.452976 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.453001 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.453090 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.453114 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.557171 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.557279 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.557302 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.557328 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.557347 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.660777 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.660837 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.660954 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.660978 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.660992 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.764329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.764425 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.764450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.764492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.764517 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.866972 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.867368 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.867448 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.867509 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.867565 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.971425 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.971502 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.971525 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.971552 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:58 crc kubenswrapper[4979]: I0130 21:40:58.971568 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:58Z","lastTransitionTime":"2026-01-30T21:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.048553 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:40:57.075236711 +0000 UTC Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.069117 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.069241 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.069341 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:40:59 crc kubenswrapper[4979]: E0130 21:40:59.069547 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:40:59 crc kubenswrapper[4979]: E0130 21:40:59.069700 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:40:59 crc kubenswrapper[4979]: E0130 21:40:59.069831 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.073803 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.073866 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.073879 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.073897 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.073907 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.177016 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.177986 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.178107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.178286 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.178378 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.281104 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.281161 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.281174 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.281204 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.281217 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.384314 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.384355 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.384365 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.384384 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.384396 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.487128 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.487200 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.487211 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.487228 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.487238 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.573012 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.588339 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.590811 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.590942 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.591001 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.591081 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.591139 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.597577 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.622139 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.640834 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.659021 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.671312 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.688262 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.694363 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.694413 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.694425 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.694446 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.694462 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.705482 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.728908 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 
2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.751383 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.766103 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.779164 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.795004 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.797368 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.797439 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.797460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.797486 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.797505 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.811815 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.829438 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.851975 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.866966 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.901280 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.901812 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.901897 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.901975 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:40:59 crc kubenswrapper[4979]: I0130 21:40:59.902082 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:40:59Z","lastTransitionTime":"2026-01-30T21:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.004939 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.004983 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.004994 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.005012 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.005025 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.049183 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:01:53.444733683 +0000 UTC Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.068843 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:00 crc kubenswrapper[4979]: E0130 21:41:00.069010 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.108885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.108946 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.108964 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.108991 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.109011 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.211892 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.211954 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.211968 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.211989 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.212002 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.316154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.316205 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.316215 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.316233 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.316244 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.419188 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.419239 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.419251 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.419271 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.419283 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.522869 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.522945 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.522959 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.522977 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.522987 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.627981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.628066 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.628079 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.628112 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.628131 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.731648 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.731704 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.731722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.731750 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.731769 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.836345 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.836402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.836424 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.836450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.836468 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.939997 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.940098 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.940123 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.940154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:00 crc kubenswrapper[4979]: I0130 21:41:00.940176 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:00Z","lastTransitionTime":"2026-01-30T21:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.044070 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.044131 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.044150 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.044177 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.044210 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.049484 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:42:57.003470457 +0000 UTC Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.069095 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.069128 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.069248 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:01 crc kubenswrapper[4979]: E0130 21:41:01.069425 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:01 crc kubenswrapper[4979]: E0130 21:41:01.069501 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:01 crc kubenswrapper[4979]: E0130 21:41:01.069625 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.148839 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.148925 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.148947 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.148983 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.149006 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.252230 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.252287 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.252299 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.252319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.252334 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.354810 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.354846 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.354857 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.354872 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.354881 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.457664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.457744 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.457768 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.457796 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.457819 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.560386 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.560448 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.560464 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.560489 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.560506 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.663658 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.663722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.663732 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.663751 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.663762 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.767660 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.767794 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.767812 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.767839 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.768092 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.870881 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.870957 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.870970 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.871006 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.871020 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.973840 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.973893 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.973907 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.973925 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:01 crc kubenswrapper[4979]: I0130 21:41:01.973938 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:01Z","lastTransitionTime":"2026-01-30T21:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.049945 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:49:10.205645296 +0000 UTC Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.069618 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.069816 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.076650 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.076688 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.076700 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.076719 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.076733 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.181406 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.181473 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.181485 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.181510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.181532 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.284709 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.284780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.284800 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.284829 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.284872 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.387983 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.388061 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.388073 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.388094 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.388103 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.485250 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.485319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.485340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.485369 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.485388 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.508112 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.513292 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.513368 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.513396 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.513431 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.513456 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.532634 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.538643 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.538699 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.538745 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.538770 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.538785 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.560067 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.564446 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.564493 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.564510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.564532 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.564550 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.580812 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.584932 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.584966 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.584978 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.584998 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.585011 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.621237 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:02 crc kubenswrapper[4979]: E0130 21:41:02.621356 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.622976 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.623002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.623010 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.623045 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.623058 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.725785 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.725887 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.725902 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.725920 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.725931 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.829282 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.829337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.829350 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.829377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.829393 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.932265 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.932311 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.932323 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.932341 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:02 crc kubenswrapper[4979]: I0130 21:41:02.932353 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:02Z","lastTransitionTime":"2026-01-30T21:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.035157 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.035203 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.035215 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.035236 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.035249 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.051142 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:09:33.504697604 +0000 UTC Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.069232 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:03 crc kubenswrapper[4979]: E0130 21:41:03.069374 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.069232 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.069559 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:03 crc kubenswrapper[4979]: E0130 21:41:03.069583 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:03 crc kubenswrapper[4979]: E0130 21:41:03.069864 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.140164 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.140214 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.140223 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.140244 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.140257 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.243209 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.243311 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.243342 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.243381 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.243409 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.346818 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.346887 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.346906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.346934 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.346953 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.449834 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.449871 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.449880 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.449895 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.449905 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.552815 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.552851 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.552862 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.552879 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.552892 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.655829 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.655862 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.655874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.655892 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.655902 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.759214 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.759275 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.759284 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.759307 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.759324 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.861287 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.861328 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.861336 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.861353 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.861365 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.964996 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.965077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.965117 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.965137 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:03 crc kubenswrapper[4979]: I0130 21:41:03.965147 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:03Z","lastTransitionTime":"2026-01-30T21:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.051350 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:09:16.228308717 +0000 UTC Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.068303 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.068360 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.068374 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.068394 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.068437 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.068707 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:04 crc kubenswrapper[4979]: E0130 21:41:04.068912 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.172132 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.172192 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.172203 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.172224 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.172236 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.275842 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.275906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.275924 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.275952 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.275970 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.379325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.379387 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.379405 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.379431 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.379452 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.483722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.483802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.483820 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.483850 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.483868 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.587105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.587158 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.587170 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.587193 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.587203 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.691804 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.691866 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.691885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.691910 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.691929 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.795449 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.795502 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.795520 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.795548 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.795570 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.898959 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.899102 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.899134 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.899169 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:04 crc kubenswrapper[4979]: I0130 21:41:04.899197 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:04Z","lastTransitionTime":"2026-01-30T21:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.002080 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.002136 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.002147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.002166 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.002177 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.052402 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:47:49.476597184 +0000 UTC Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.071231 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.071357 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:05 crc kubenswrapper[4979]: E0130 21:41:05.071520 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:05 crc kubenswrapper[4979]: E0130 21:41:05.071721 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.071833 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:05 crc kubenswrapper[4979]: E0130 21:41:05.071920 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.087165 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.099742 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.105146 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.105256 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.105328 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.105378 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.105450 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.112237 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.123619 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.137417 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.152990 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.167222 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.182357 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"}
,{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.200635 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c
97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.208721 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.208766 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.208780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.208801 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.208815 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.212308 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.221644 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc 
kubenswrapper[4979]: I0130 21:41:05.236727 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.250299 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.265496 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.279985 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.298631 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.312099 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.312171 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.312197 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.312232 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.312258 4979 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.321001 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.414851 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.414899 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.414908 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.414926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.414937 4979 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.517855 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.517907 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.517921 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.517943 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.517957 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.621377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.621426 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.621436 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.621457 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.621468 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.724744 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.724805 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.724822 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.724849 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.724867 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.828335 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.828393 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.828403 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.828425 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.828438 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.931715 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.931767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.931780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.931800 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:05 crc kubenswrapper[4979]: I0130 21:41:05.931817 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:05Z","lastTransitionTime":"2026-01-30T21:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.035297 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.035384 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.035404 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.035433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.035459 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.053113 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:22:26.350813452 +0000 UTC Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.069740 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:06 crc kubenswrapper[4979]: E0130 21:41:06.069950 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.139190 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.139232 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.139242 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.139260 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.139273 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.241962 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.242033 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.242107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.242174 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.242192 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.345329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.345385 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.345399 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.345422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.345435 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.448396 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.448448 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.448457 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.448476 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.448487 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.551422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.551462 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.551471 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.551484 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.551513 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.654516 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.654567 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.654581 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.654607 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.654621 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.758522 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.758602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.758621 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.758651 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.758677 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.862515 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.862571 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.862581 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.862603 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.862615 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.965601 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.965668 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.965683 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.965703 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:06 crc kubenswrapper[4979]: I0130 21:41:06.965716 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:06Z","lastTransitionTime":"2026-01-30T21:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.053672 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:52:15.992600455 +0000 UTC Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.068707 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.068813 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:07 crc kubenswrapper[4979]: E0130 21:41:07.068896 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:07 crc kubenswrapper[4979]: E0130 21:41:07.069102 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.069183 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:07 crc kubenswrapper[4979]: E0130 21:41:07.069360 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.071238 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.071288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.071301 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.071315 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:07 crc kubenswrapper[4979]: I0130 21:41:07.071325 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:07Z","lastTransitionTime":"2026-01-30T21:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[log condensed: node-status block repeated 10 times, 21:41:07.071 through 21:41:08.000, omitted]
Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.054264 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 18:15:15.985600763 +0000 UTC
Jan 30 21:41:08 crc kubenswrapper[4979]: I0130 21:41:08.069708 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:08 crc kubenswrapper[4979]: E0130 21:41:08.069958 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
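Note that the kubelet-serving certificate itself is fine until 2026-02-24; what changes on each certificate_manager record is the jittered rotation deadline, and every deadline logged here is already in the past relative to the node clock. That pattern is consistent with the manager recomputing the deadline on each sync and queueing a rotation that has not completed yet, though the log alone does not show why. Checking the arithmetic with values copied from the records above:

    from datetime import datetime, timezone

    # Timestamps copied from the surrounding records.
    now = datetime(2026, 1, 30, 21, 41, 8, tzinfo=timezone.utc)       # log wall clock
    deadline = datetime(2026, 1, 4, 18, 15, 15, tzinfo=timezone.utc)  # rotation deadline
    expiry = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)     # cert expiration
    print(deadline < now < expiry)  # True: rotation is overdue, cert still valid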
[log condensed: node-status block repeated 10 times, 21:41:08.104 through 21:41:09.037, omitted]
Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.055497 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:41:40.129419538 +0000 UTC
Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.069182 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:09 crc kubenswrapper[4979]: E0130 21:41:09.069365 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.069605 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.069643 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:09 crc kubenswrapper[4979]: E0130 21:41:09.069711 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:09 crc kubenswrapper[4979]: E0130 21:41:09.069894 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:09 crc kubenswrapper[4979]: I0130 21:41:09.070892 4979 scope.go:117] "RemoveContainer" containerID="15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90"
Jan 30 21:41:09 crc kubenswrapper[4979]: E0130 21:41:09.071370 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a"
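The two records above close the loop: the kubelet is cleaning up a dead ovnkube-controller container and backing off 20 s before restarting it, so the CNI configuration that the Ready condition is waiting for never appears. A sketch for extracting every back-off record from the same hypothetical kubelet.log dump:

    import re

    # Matches the back-off fragment inside the escaped err="..." payload above.
    pat = re.compile(r'back-off (\S+) restarting failed container=(\S+) pod=([^(\s]+)\(')
    with open("kubelet.log", encoding="utf-8") as f:
        for line in f:
            for delay, container, pod in pat.findall(line):
                print(f"{pod}: {container} backing off {delay}")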
[log condensed: node-status block repeated 9 times, 21:41:09.141 through 21:41:09.968, omitted]
Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.055690 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:51:42.34484665 +0000 UTC
Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.069298 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:10 crc kubenswrapper[4979]: E0130 21:41:10.069462 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
[log condensed: node-status block repeated 5 times, 21:41:10.071 through 21:41:10.483; the final record is cut off mid-message in the source]
Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.586141 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.586184 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.586193 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.586209 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.586219 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.690736 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.690787 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.690797 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.690816 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.690826 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.794664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.794729 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.794742 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.794767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.794789 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.897922 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.897988 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.898002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.898025 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:10 crc kubenswrapper[4979]: I0130 21:41:10.898086 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:10Z","lastTransitionTime":"2026-01-30T21:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.001806 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.001869 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.001883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.001904 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.001916 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.056496 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:46:54.355134177 +0000 UTC Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.069279 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:11 crc kubenswrapper[4979]: E0130 21:41:11.069453 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.069604 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:11 crc kubenswrapper[4979]: E0130 21:41:11.069930 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.070028 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:11 crc kubenswrapper[4979]: E0130 21:41:11.070338 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.105228 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.105322 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.105350 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.105390 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.105413 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.208921 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.208973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.208984 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.209007 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.209019 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.312246 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.312293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.312303 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.312325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.312336 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.414608 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.414667 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.414679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.414702 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.414717 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.517324 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.517377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.517388 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.517408 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.517423 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.619987 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.620060 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.620073 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.620093 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.620104 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.722554 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.722601 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.722614 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.722633 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.722648 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.825745 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.825785 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.825796 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.825815 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.825827 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.928176 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.928215 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.928225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.928242 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:11 crc kubenswrapper[4979]: I0130 21:41:11.928253 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:11Z","lastTransitionTime":"2026-01-30T21:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.032399 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.032454 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.032463 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.032480 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.032489 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.057361 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:47:39.279275965 +0000 UTC Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.068692 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.068837 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.135392 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.135439 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.135449 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.135469 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.135480 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.238153 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.238193 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.238211 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.238231 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.238244 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.341419 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.341460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.341469 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.341486 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.341499 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.444304 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.444376 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.444394 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.444424 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.444446 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.546501 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.546553 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.546565 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.546584 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.546595 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.649256 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.649304 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.649313 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.649332 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.649347 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.666132 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.666167 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.666179 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.666196 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.666221 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.684590 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.685287 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " 
pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.685420 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.685467 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:44.685453252 +0000 UTC m=+100.646700285 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.688672 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.688700 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.688709 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.688728 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.688740 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.703146 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.707184 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.707239 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.707253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.707276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.707287 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.721102 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.725799 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.725835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.725851 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.725869 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.725882 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.743769 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.747712 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.747748 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.747758 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.747775 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.747785 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.761993 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:12 crc kubenswrapper[4979]: E0130 21:41:12.762171 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.764071 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.764103 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.764113 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.764140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.764152 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.867573 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.867652 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.867672 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.867700 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.867718 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.969841 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.969893 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.969903 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.969921 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:12 crc kubenswrapper[4979]: I0130 21:41:12.969932 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:12Z","lastTransitionTime":"2026-01-30T21:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.058210 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 14:07:30.108631439 +0000 UTC Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.069781 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.069878 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:13 crc kubenswrapper[4979]: E0130 21:41:13.069983 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.069796 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:13 crc kubenswrapper[4979]: E0130 21:41:13.070298 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:13 crc kubenswrapper[4979]: E0130 21:41:13.070178 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.072224 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.072271 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.072282 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.072300 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.072315 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.175427 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.175480 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.175491 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.175510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.175523 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.278693 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.278763 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.278776 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.278802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.278817 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.382376 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.382433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.382443 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.382460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.382478 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.485377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.485423 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.485432 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.485447 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.485457 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.588784 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.588831 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.588844 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.588862 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.588873 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.691969 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.692017 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.692050 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.692073 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.692087 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.795346 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.795392 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.795403 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.795422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.795436 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.897682 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.897740 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.897752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.897767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:13 crc kubenswrapper[4979]: I0130 21:41:13.897778 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:13Z","lastTransitionTime":"2026-01-30T21:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.000858 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.000906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.000916 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.000937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.000949 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.059422 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:53:09.857021026 +0000 UTC Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.068731 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:14 crc kubenswrapper[4979]: E0130 21:41:14.068941 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.103831 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.103893 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.103905 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.103927 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.103938 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.206590 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.206651 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.206668 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.206696 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.206715 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.310105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.310156 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.310171 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.310194 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.310209 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.412879 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.412949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.412968 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.412998 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.413022 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.516199 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.516250 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.516263 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.516283 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.516297 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.535828 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/0.log" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.536277 4979 generic.go:334] "Generic (PLEG): container finished" podID="6722e8df-a635-4808-b6b9-d5633fc3d34b" containerID="553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7" exitCode=1 Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.536326 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerDied","Data":"553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.537109 4979 scope.go:117] "RemoveContainer" containerID="553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.552669 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.569361 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.584421 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.602701 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.618248 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.619958 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.620015 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.620055 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.620083 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.620100 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.633151 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.649587 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.664253 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.680632 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.695015 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.714330 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c
97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.722888 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.722949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.722960 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.722981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.722994 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.727378 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.740249 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc 
kubenswrapper[4979]: I0130 21:41:14.759700 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.776269 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.791616 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.805788 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:14Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.826241 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.826285 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.826295 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.826314 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.826325 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.929736 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.929787 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.929797 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.929814 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:14 crc kubenswrapper[4979]: I0130 21:41:14.929826 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:14Z","lastTransitionTime":"2026-01-30T21:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.032965 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.033312 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.033342 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.033366 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.033395 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.059968 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:46:31.754328736 +0000 UTC Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.069091 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.069140 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.069149 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:15 crc kubenswrapper[4979]: E0130 21:41:15.069251 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:15 crc kubenswrapper[4979]: E0130 21:41:15.069466 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:15 crc kubenswrapper[4979]: E0130 21:41:15.069493 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.085090 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.099637 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.114180 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.127976 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.139918 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.141632 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.141667 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.141681 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.141701 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.141714 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.152362 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.163967 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.175171 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.193610 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.209217 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.221442 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.236259 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.244537 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.244587 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.244603 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.244625 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.244645 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.255008 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.271768 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.294197 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.308631 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.322145 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.348423 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.348484 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.348501 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.348524 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.348540 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.451272 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.451322 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.451334 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.451353 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.451368 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.542720 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/0.log" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.542779 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerStarted","Data":"94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.554020 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.554418 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.554515 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.554652 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.554736 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.561981 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.579715 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5a
b7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.595854 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.610174 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.623534 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.633815 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.646730 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.658151 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.658226 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.658243 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.658273 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.658291 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.659805 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.677021 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.693692 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.706640 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.720677 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 
21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.733537 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.747806 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.760717 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.760752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.760763 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.760781 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.760792 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.763141 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.778735 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.798876 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:15Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.863654 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.863695 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.863705 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.863723 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.863735 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.966569 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.966613 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.966623 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.966640 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:15 crc kubenswrapper[4979]: I0130 21:41:15.966650 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:15Z","lastTransitionTime":"2026-01-30T21:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.060308 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:21:53.672758142 +0000 UTC Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.069071 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.069376 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.069507 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.069520 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.069532 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.069544 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: E0130 21:41:16.069754 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.172838 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.172906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.172919 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.172943 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.172956 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.275486 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.275522 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.275531 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.275576 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.275587 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.378337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.378380 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.378392 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.378414 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.378431 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.481634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.481691 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.481704 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.481729 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.481749 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.584077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.584137 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.584147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.584166 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.584176 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.686929 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.686973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.686983 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.687002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.687014 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.789984 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.790051 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.790061 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.790082 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.790092 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.892225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.892265 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.892275 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.892292 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.892302 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.995433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.995479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.995494 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.995514 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:16 crc kubenswrapper[4979]: I0130 21:41:16.995528 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:16Z","lastTransitionTime":"2026-01-30T21:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.061857 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:30:50.199419506 +0000 UTC Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.069281 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.069342 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.069374 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:17 crc kubenswrapper[4979]: E0130 21:41:17.069439 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:17 crc kubenswrapper[4979]: E0130 21:41:17.069476 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:17 crc kubenswrapper[4979]: E0130 21:41:17.069541 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.098067 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.098113 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.098131 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.098149 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.098161 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:17Z","lastTransitionTime":"2026-01-30T21:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.200855 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.200902 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.200913 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.200933 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.200946 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:17Z","lastTransitionTime":"2026-01-30T21:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.303325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.303375 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.303388 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.303410 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.303427 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:17Z","lastTransitionTime":"2026-01-30T21:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.406698 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.406738 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.406749 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.406770 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.406780 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:17Z","lastTransitionTime":"2026-01-30T21:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.508667 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.508723 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.508735 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.508755 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.508768 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:17Z","lastTransitionTime":"2026-01-30T21:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.611846 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.611913 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.611923 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.611942 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.611953 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:17Z","lastTransitionTime":"2026-01-30T21:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.714456 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.714506 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.714520 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.714543 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.714557 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:17Z","lastTransitionTime":"2026-01-30T21:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.818130 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.818226 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.818237 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.818255 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.818265 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:17Z","lastTransitionTime":"2026-01-30T21:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.920958 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.921007 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.921021 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.921057 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:17 crc kubenswrapper[4979]: I0130 21:41:17.921104 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:17Z","lastTransitionTime":"2026-01-30T21:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.024454 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.024509 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.024522 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.024543 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.024556 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.062007 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:57:47.371752162 +0000 UTC Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.068800 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:18 crc kubenswrapper[4979]: E0130 21:41:18.069144 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.085827 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.127559 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.127606 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.127618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.127634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.127648 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.231078 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.231136 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.231150 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.231173 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.231190 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.334364 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.334423 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.334434 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.334454 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.334466 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.438170 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.438285 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.438296 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.438320 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.438332 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.544492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.544588 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.544650 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.544717 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.544913 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.648088 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.648155 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.648168 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.648189 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.648207 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.751135 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.751208 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.751221 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.751240 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.751253 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.854121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.854165 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.854178 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.854195 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.854208 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.956834 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.956891 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.956900 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.956917 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:18 crc kubenswrapper[4979]: I0130 21:41:18.956927 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:18Z","lastTransitionTime":"2026-01-30T21:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.059953 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.059994 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.060002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.060022 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.060049 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.063187 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:46:36.203250222 +0000 UTC Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.069518 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.069598 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.069529 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:19 crc kubenswrapper[4979]: E0130 21:41:19.069650 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:19 crc kubenswrapper[4979]: E0130 21:41:19.069722 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:19 crc kubenswrapper[4979]: E0130 21:41:19.069802 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.163459 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.163510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.163521 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.163542 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.163562 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.266338 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.266400 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.266412 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.266428 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.266443 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.369605 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.369652 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.369660 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.369679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.369690 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.472390 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.472461 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.472479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.472506 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.472525 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.574906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.574956 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.574973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.574994 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.575007 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.677981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.678101 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.678137 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.678172 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.678198 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.780795 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.780865 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.780883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.780911 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.780933 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.884422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.884477 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.884489 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.884510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.884525 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.988927 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.989002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.989024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.989114 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:19 crc kubenswrapper[4979]: I0130 21:41:19.989168 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:19Z","lastTransitionTime":"2026-01-30T21:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.063411 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:07:43.9219475 +0000 UTC Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.068959 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:20 crc kubenswrapper[4979]: E0130 21:41:20.069206 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.092171 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.092211 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.092253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.092273 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.092286 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.194835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.194885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.194897 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.194939 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.194954 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.297879 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.297925 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.297936 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.297951 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.297961 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.401253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.401300 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.401312 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.401333 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.401349 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.504848 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.504905 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.504916 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.504938 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.504952 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.608380 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.608439 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.608448 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.608468 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.608482 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.711708 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.711796 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.711821 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.711854 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.711880 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.814972 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.815056 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.815073 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.815096 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.815109 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.919299 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.919370 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.919402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.919433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:20 crc kubenswrapper[4979]: I0130 21:41:20.919453 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:20Z","lastTransitionTime":"2026-01-30T21:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.021563 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.021607 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.021617 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.021633 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.021643 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.064383 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:54:28.438074626 +0000 UTC Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.068730 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.068767 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.068837 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:21 crc kubenswrapper[4979]: E0130 21:41:21.068935 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:21 crc kubenswrapper[4979]: E0130 21:41:21.069085 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:21 crc kubenswrapper[4979]: E0130 21:41:21.069512 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.069834 4979 scope.go:117] "RemoveContainer" containerID="15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.124341 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.124887 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.124910 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.124933 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.124952 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.228920 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.228994 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.229012 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.229069 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.229117 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.332909 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.332949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.332964 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.332987 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.333004 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.435539 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.435588 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.435601 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.435622 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.435638 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.538744 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.538801 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.538812 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.538835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.538847 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.566514 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/2.log" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.570415 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.571401 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.595412 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z"
Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.608873 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.632271 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z"
Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.641021 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.641091 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.641105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.641147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.641161 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.652791 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z"
Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.682788 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.705332 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.726211 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.744649 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.744706 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.744721 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.744742 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.744757 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.744780 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.762103 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc 
kubenswrapper[4979]: I0130 21:41:21.778292 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3dd3ec-1102-4c07-82da-104ad61fd41f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b29baff6b64ae9923eb8c3a9824c90722fe24521d52c2842e6ed50404f0264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.794624 4979 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.810935 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.827747 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.842102 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.850271 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.850330 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.850343 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.850365 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.850382 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.866869 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.884090 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.901157 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.914524 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.953791 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.953849 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.953861 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.953876 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:21 crc kubenswrapper[4979]: I0130 21:41:21.953887 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:21Z","lastTransitionTime":"2026-01-30T21:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.056413 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.056464 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.056474 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.056495 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.056507 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.065055 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:56:32.466463254 +0000 UTC Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.069414 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:22 crc kubenswrapper[4979]: E0130 21:41:22.069660 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.159476 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.159521 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.159533 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.159552 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.159564 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.263066 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.263116 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.263128 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.263148 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.263162 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.366557 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.366634 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.366657 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.366687 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.366704 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.469391 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.469443 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.469456 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.469474 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.469487 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.571992 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.572072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.572087 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.572108 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.572122 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.576082 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/3.log" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.576683 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/2.log" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.580184 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" exitCode=1 Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.580259 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.580354 4979 scope.go:117] "RemoveContainer" containerID="15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.581197 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:41:22 crc kubenswrapper[4979]: E0130 21:41:22.581390 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.599122 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.612110 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.626948 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.641917 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.660794 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.675297 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.675329 4979 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.675340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.675359 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.675372 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.690625 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.706077 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.722013 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.738524 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.752694 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.766949 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.781664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.781730 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.781747 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.781774 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.781793 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.788330 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15fc3d74444ce8f5480a191593955a7debfd416c97cb0335cb0eda0031824d90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:40:55Z\\\",\\\"message\\\":\\\": olm-operator,},ClusterIP:10.217.5.168,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.168],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0130 21:40:55.034758 6641 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T21:40\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:22Z\\\",\\\"message\\\":\\\"t:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 21:41:22.154367 7040 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-pk47q] creating logical port openshift-multus_network-metrics-daemon-pk47q for pod on switch crc\\\\nF0130 21:41:22.154431 7040 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.801484 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.813356 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.823704 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3dd3ec-1102-4c07-82da-104ad61fd41f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b29baff6b64ae9923eb8c3a9824c90722fe24521d52c2842e6ed50404f0264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.839662 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.855957 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.867497 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.885876 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.885926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.885937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.885954 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.885964 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.988562 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.988618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.988629 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.988648 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:22 crc kubenswrapper[4979]: I0130 21:41:22.988658 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:22Z","lastTransitionTime":"2026-01-30T21:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.065463 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:09:22.620632317 +0000 UTC Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.068933 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.069006 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.069142 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.068947 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.069315 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.069378 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.091247 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.091283 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.091296 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.091313 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.091325 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.122065 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.122119 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.122129 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.122147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.122159 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.142433 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.145722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.145762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.145774 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.145794 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.145806 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.163361 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.168155 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.168189 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.168200 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.168218 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.168228 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.184935 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.189728 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.189968 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.189977 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.189997 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.190012 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.204764 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.208248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.208284 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.208293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.208312 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.208325 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.221224 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 
2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.221408 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.223113 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.223150 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.223161 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.223181 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.223195 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.325548 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.325585 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.325594 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.325609 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.325620 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.428467 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.428520 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.428531 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.428548 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.428561 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.531837 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.531919 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.531940 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.531972 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.531997 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.586416 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/3.log" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.592570 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:41:23 crc kubenswrapper[4979]: E0130 21:41:23.592902 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.610395 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.631659 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.636629 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.637213 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.637247 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.637281 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.637306 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.664131 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:22Z\\\",\\\"message\\\":\\\"t:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 21:41:22.154367 7040 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-pk47q] creating logical port openshift-multus_network-metrics-daemon-pk47q for pod on switch crc\\\\nF0130 21:41:22.154431 7040 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:41:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.679592 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 
21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.695747 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.713280 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3dd3ec-1102-4c07-82da-104ad61fd41f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b29baff6b64ae9923eb8c3a9824c90722fe24521d52c2842e6ed50404f0264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.731834 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.740138 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.740186 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.740199 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.740218 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.740232 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.751579 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.768423 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.783798 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.800153 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.817462 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.835093 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.842747 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.842811 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.842825 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.842846 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.842889 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.851519 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.868113 4979 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.879995 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.894281 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.909371 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:23Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.945971 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.946025 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.948213 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.948345 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:23 crc kubenswrapper[4979]: I0130 21:41:23.948374 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:23Z","lastTransitionTime":"2026-01-30T21:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.052482 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.052541 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.052556 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.052585 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.052605 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.066201 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:09:49.838978519 +0000 UTC
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.069651 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:24 crc kubenswrapper[4979]: E0130 21:41:24.069896 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.155871 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.155946 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.155962 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.155984 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.156002 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.260995 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.261083 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.261097 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.261121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.261136 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.363952 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.364012 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.364071 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.364102 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.364120 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.467107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.467154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.467164 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.467179 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.467190 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.570538 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.570641 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.570660 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.570687 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.570720 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.673417 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.673481 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.673499 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.673530 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.673549 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.777513 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.777563 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.777572 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.777592 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.777605 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.879649 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.879710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.879724 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.879747 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.879761 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.983961 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.984024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.984055 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.984077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:24 crc kubenswrapper[4979]: I0130 21:41:24.984091 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:24Z","lastTransitionTime":"2026-01-30T21:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.067138 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:14:10.597160908 +0000 UTC
Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.069604 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.069705 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.069767 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:25 crc kubenswrapper[4979]: E0130 21:41:25.069936 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:25 crc kubenswrapper[4979]: E0130 21:41:25.071357 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:25 crc kubenswrapper[4979]: E0130 21:41:25.071488 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.087675 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.087731 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.087742 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.087762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.087775 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.090406 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.114281 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5a
b7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.134822 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.153938 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.172625 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.187247 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.190270 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.190314 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.190326 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.190345 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.190357 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.202411 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.216175 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.233310 4979 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.249849 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.261655 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.275438 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3dd3ec-1102-4c07-82da-104ad61fd41f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b29baff6b64ae9923eb8c3a9824c90722fe24521d52c2842e6ed50404f0264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.292432 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.293338 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.293416 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.293446 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.293476 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.293498 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.305687 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.318984 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.347988 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:22Z\\\",\\\"message\\\":\\\"t:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 21:41:22.154367 7040 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-pk47q] creating logical port openshift-multus_network-metrics-daemon-pk47q for pod on switch crc\\\\nF0130 21:41:22.154431 7040 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:41:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.362224 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.375998 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:25Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.397253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.397325 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.397339 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.397362 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.397379 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.500254 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.500315 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.500329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.500350 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.500365 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.603706 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.603960 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.603977 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.603997 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.604012 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.706435 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.706495 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.706513 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.706536 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.706556 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.809433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.809466 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.809474 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.809489 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.809499 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.912573 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.912622 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.912635 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.912653 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:25 crc kubenswrapper[4979]: I0130 21:41:25.912666 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:25Z","lastTransitionTime":"2026-01-30T21:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.016219 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.016295 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.016306 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.016323 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.016337 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.068110 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:41:23.597023925 +0000 UTC Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.069394 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:26 crc kubenswrapper[4979]: E0130 21:41:26.069579 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.120757 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.120838 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.120862 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.120894 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.120918 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.223878 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.223937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.223953 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.223973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.223986 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.326450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.326505 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.326519 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.326539 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.326554 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.429919 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.430002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.430022 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.430089 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.430110 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.532811 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.532888 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.532906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.532933 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.532949 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.635395 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.635474 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.635493 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.635525 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.635544 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.739575 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.739650 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.739669 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.739699 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.739719 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.842856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.842918 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.842937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.842963 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.843077 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.945946 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.946010 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.946024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.946059 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:26 crc kubenswrapper[4979]: I0130 21:41:26.946074 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:26Z","lastTransitionTime":"2026-01-30T21:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.048980 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.049024 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.049054 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.049074 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.049086 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.069118 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:21:13.3025775 +0000 UTC Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.069449 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.069522 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.069603 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:27 crc kubenswrapper[4979]: E0130 21:41:27.069750 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:27 crc kubenswrapper[4979]: E0130 21:41:27.069849 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:27 crc kubenswrapper[4979]: E0130 21:41:27.070050 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.152152 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.152198 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.152208 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.152226 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.152240 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.254670 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.254724 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.254733 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.254752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.254763 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.358147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.358219 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.358233 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.358257 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.358271 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.460961 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.461053 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.461072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.461099 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.461120 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.565133 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.565204 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.565221 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.565249 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.565269 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.668460 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.668538 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.668560 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.668591 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.668613 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.772617 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.772690 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.772705 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.772726 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.772739 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.875620 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.875681 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.875698 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.875723 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.875742 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.978671 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.978780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.978800 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.978830 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:27 crc kubenswrapper[4979]: I0130 21:41:27.978850 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:27Z","lastTransitionTime":"2026-01-30T21:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.068648 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.068814 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.069700 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:18:22.095360666 +0000 UTC Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.082227 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.082310 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.082333 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.082359 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.082380 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.184863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.184911 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.184922 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.184941 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.184962 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.287791 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.287855 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.287874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.287902 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.287919 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.390713 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.390770 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.390782 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.390800 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.390816 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.493781 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.493856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.493868 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.493889 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.493902 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.597252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.597319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.597329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.597347 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.597358 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.700770 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.700828 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.700853 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.700877 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.700889 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.803254 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.803323 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.803347 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.803380 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.803406 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.875386 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.875498 4979 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.875583 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.875557979 +0000 UTC m=+148.836805012 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.875500 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.875616 4979 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.875746 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.875726134 +0000 UTC m=+148.836973187 (durationBeforeRetry 1m4s). 
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.906650 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.906693 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.906707 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.906724 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.906734 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:28Z","lastTransitionTime":"2026-01-30T21:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.976155 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976347 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.976315485 +0000 UTC m=+148.937562538 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.976397 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:28 crc kubenswrapper[4979]: I0130 21:41:28.976456 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976557 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976573 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976584 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976614 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976625 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.976617853 +0000 UTC m=+148.937864886 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976635 4979 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976662 4979 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:41:28 crc kubenswrapper[4979]: E0130 21:41:28.976707 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.976692595 +0000 UTC m=+148.937939638 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.009810 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.009867 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.009883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.009902 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.009935 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.069477 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.069595 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.069656 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:29 crc kubenswrapper[4979]: E0130 21:41:29.069813 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.069871 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:31:24.77673221 +0000 UTC Jan 30 21:41:29 crc kubenswrapper[4979]: E0130 21:41:29.069971 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:29 crc kubenswrapper[4979]: E0130 21:41:29.070102 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.113464 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.113534 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.113546 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.113567 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.113604 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.216923 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.217074 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.217103 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.217140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.217160 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.321502 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.321565 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.321580 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.321599 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.321612 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.424403 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.424443 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.424456 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.424476 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.424489 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.527085 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.527147 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.527162 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.527181 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.527197 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.629582 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.629621 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.629631 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.629647 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.629657 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.732445 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.732503 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.732520 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.732547 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.732564 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.835503 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.835598 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.835623 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.835654 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.835678 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.939686 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.939764 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.939780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.939804 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:29 crc kubenswrapper[4979]: I0130 21:41:29.939820 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:29Z","lastTransitionTime":"2026-01-30T21:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.042651 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.042706 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.042720 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.042742 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.042760 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:30Z","lastTransitionTime":"2026-01-30T21:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.069499 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:30 crc kubenswrapper[4979]: E0130 21:41:30.069692 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.070438 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:04:17.804079997 +0000 UTC Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.145179 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.145233 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.145248 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.145266 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.145278 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:30Z","lastTransitionTime":"2026-01-30T21:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.248657 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.248716 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.248727 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.248745 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.248757 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:30Z","lastTransitionTime":"2026-01-30T21:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.351295 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.351395 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.351406 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.351426 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.351438 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:30Z","lastTransitionTime":"2026-01-30T21:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.454564 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.454631 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.454664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.454699 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.454720 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:30Z","lastTransitionTime":"2026-01-30T21:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.557805 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.557881 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.557906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.557937 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.557959 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:30Z","lastTransitionTime":"2026-01-30T21:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.661768 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.661847 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.661884 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.661918 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.661942 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:30Z","lastTransitionTime":"2026-01-30T21:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.765200 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.765288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.765301 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.765322 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.765340 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:30Z","lastTransitionTime":"2026-01-30T21:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.868250 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.868337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.868357 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.868383 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.868398 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:30Z","lastTransitionTime":"2026-01-30T21:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.971781 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.971855 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.971870 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.971896 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:30 crc kubenswrapper[4979]: I0130 21:41:30.971911 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:30Z","lastTransitionTime":"2026-01-30T21:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.068848 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:31 crc kubenswrapper[4979]: E0130 21:41:31.069079 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.069283 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:31 crc kubenswrapper[4979]: E0130 21:41:31.069411 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.069508 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:31 crc kubenswrapper[4979]: E0130 21:41:31.069691 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.070883 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:38:26.446623235 +0000 UTC Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.073878 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.073926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.073942 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.073965 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.073987 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:31Z","lastTransitionTime":"2026-01-30T21:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.176904 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.176970 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.176988 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.177022 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.177074 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:31Z","lastTransitionTime":"2026-01-30T21:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.280296 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.280389 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.280410 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.280441 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.280464 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:31Z","lastTransitionTime":"2026-01-30T21:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.384630 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.384672 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.384686 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.384704 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.384713 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:31Z","lastTransitionTime":"2026-01-30T21:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.488470 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.488528 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.488547 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.488570 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.488584 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:31Z","lastTransitionTime":"2026-01-30T21:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.593327 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.593914 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.593936 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.593972 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.593992 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:31Z","lastTransitionTime":"2026-01-30T21:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.697462 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.697530 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.697548 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.697572 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.697596 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:31Z","lastTransitionTime":"2026-01-30T21:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.800196 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.800578 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.800659 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.800755 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.800869 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:31Z","lastTransitionTime":"2026-01-30T21:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.904933 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.904981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.904994 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.905015 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:31 crc kubenswrapper[4979]: I0130 21:41:31.905049 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:31Z","lastTransitionTime":"2026-01-30T21:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.008618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.008724 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.008750 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.008786 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.008830 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:32Z","lastTransitionTime":"2026-01-30T21:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.069233 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:32 crc kubenswrapper[4979]: E0130 21:41:32.069815 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.071203 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 00:41:59.811260264 +0000 UTC Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.112058 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.112107 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.112119 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.112139 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.112152 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:32Z","lastTransitionTime":"2026-01-30T21:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.215276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.215336 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.215351 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.215372 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.215384 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:32Z","lastTransitionTime":"2026-01-30T21:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.318648 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.318737 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.318753 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.318776 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.318789 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:32Z","lastTransitionTime":"2026-01-30T21:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.422484 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.422998 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.423259 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.423462 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.423611 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:32Z","lastTransitionTime":"2026-01-30T21:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.527708 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.527785 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.527805 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.527836 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.527857 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:32Z","lastTransitionTime":"2026-01-30T21:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.630340 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.630399 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.630411 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.630437 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.630450 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:32Z","lastTransitionTime":"2026-01-30T21:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.732589 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.733077 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.733227 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.733329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.733406 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:32Z","lastTransitionTime":"2026-01-30T21:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.837465 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.837846 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.838265 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.838503 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.838639 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:32Z","lastTransitionTime":"2026-01-30T21:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.941572 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.941648 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.941667 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.941697 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:32 crc kubenswrapper[4979]: I0130 21:41:32.941718 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:32Z","lastTransitionTime":"2026-01-30T21:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.045114 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.045179 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.045237 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.045264 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.045283 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.069026 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.069152 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.069074 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.069328 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.069440 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.069540 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.072003 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:32:53.950043257 +0000 UTC Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.147990 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.148470 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.148514 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.148547 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.148573 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.251102 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.251141 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.251152 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.251170 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.251183 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.355190 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.355238 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.355252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.355282 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.355298 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.450284 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.450350 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.450370 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.450398 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.450416 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.475732 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.483066 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.483163 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.483180 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.483204 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.483218 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.498117 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.503679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.503925 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.504013 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.504177 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.504278 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.520382 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.525600 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.525650 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.525664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.525711 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.525728 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.546658 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.551762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.551819 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
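
Each failed attempt above carries the same strategic-merge patch against the Node object: the "$setElementOrder/conditions" directive pins the ordering of the merged conditions list (keyed by "type"), while "allocatable", "capacity", "conditions", "images", and "nodeInfo" carry the new status. A minimal Python sketch of the payload's shape, with values trimmed from the log entries above (illustration only, not kubelet code; the elided fields are present in the full payload):

import json

# Skeleton of the node-status patch seen in the log entries above.
patch = {
    "status": {
        # Strategic-merge-patch directive: fixes the order of the
        # merged "conditions" list, keyed by each entry's "type".
        "$setElementOrder/conditions": [
            {"type": "MemoryPressure"},
            {"type": "DiskPressure"},
            {"type": "PIDPressure"},
            {"type": "Ready"},
        ],
        "allocatable": {"cpu": "11800m", "memory": "32404560Ki"},
        "capacity": {"cpu": "12", "memory": "32865360Ki"},
        "conditions": [
            {
                "type": "Ready",
                "status": "False",
                "reason": "KubeletNotReady",
                "lastHeartbeatTime": "2026-01-30T21:41:33Z",
            },
            # MemoryPressure/DiskPressure/PIDPressure entries elided.
        ],
        # "images" and "nodeInfo" elided; see the full payload above.
    }
}

print(json.dumps(patch, indent=2))
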
event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.551831 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.551856 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.551873 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.571621 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:33Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:33 crc kubenswrapper[4979]: E0130 21:41:33.571757 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.574737 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
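
At this point the kubelet has exhausted its retries ("update node status exceeds retry count"). Every attempt failed for the same reason: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30. A minimal diagnostic sketch that repeats the failing check from the host (assumptions: Python 3 on the CRC host, the third-party cryptography package >= 42 installed, and the 127.0.0.1:9743 endpoint from the log still listening):

import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509  # assumption: package is installed

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint taken from the log

# Disable verification so the handshake succeeds even though the
# certificate is expired; we only want to read its validity window.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER certificate

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print(f"notBefore = {cert.not_valid_before_utc:%Y-%m-%dT%H:%M:%SZ}")
print(f"notAfter  = {cert.not_valid_after_utc:%Y-%m-%dT%H:%M:%SZ}")
print("EXPIRED" if now > cert.not_valid_after_utc else "still valid")

If notAfter is in the past, the status updates above cannot succeed until the webhook's serving certificate is rotated or the node clock is corrected.
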
event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.574824 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.574843 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.574874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.574892 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.678566 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.678629 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.678649 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.678679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.678710 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.782772 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.782824 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.782838 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.782858 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.782874 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.886244 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.886329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.886349 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.886377 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.886399 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.988473 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.988542 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.988562 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.988591 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:33 crc kubenswrapper[4979]: I0130 21:41:33.988608 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:33Z","lastTransitionTime":"2026-01-30T21:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.069276 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:34 crc kubenswrapper[4979]: E0130 21:41:34.069442 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.073419 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:23:18.157942087 +0000 UTC Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.091653 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.091813 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.091910 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.092017 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.092164 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.195767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.195847 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.195870 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.195899 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.195918 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.298803 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.298857 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.298870 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.298887 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.298898 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.402668 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.402745 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.402782 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.402816 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.402839 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.506577 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.506707 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.506727 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.506761 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.506779 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.610613 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.610678 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.610704 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.610738 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.610805 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.714523 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.714951 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.715110 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.715239 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.715355 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.818177 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.818232 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.818242 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.818259 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.818272 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.921482 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.921559 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.921574 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.921599 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:34 crc kubenswrapper[4979]: I0130 21:41:34.921614 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:34Z","lastTransitionTime":"2026-01-30T21:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.025817 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.025881 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.025900 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.025928 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.025947 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.068876 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.068876 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.069108 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:35 crc kubenswrapper[4979]: E0130 21:41:35.069262 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
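The entries that follow shift from the missing CNI configuration to a second, independent failure: every status PATCH the kubelet sends is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is months earlier than the node clock (2026-01-30T21:41:35Z). The sketch below is not part of the log; it reproduces that validity check with Go's crypto/x509. The certificate path is a hypothetical placeholder: the log only shows the webhook container mounting a webhook-cert volume at /etc/webhook-cert/, and does not name the file.

```go
// Minimal sketch (assumptions: run on the node with read access to the
// webhook serving certificate at the hypothetical path below).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; the log names only the /etc/webhook-cert/ mount.
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// The same comparison the TLS handshake makes. In the log it fails:
	// current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z.
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("certificate expired: NotAfter=%s now=%s\n",
			cert.NotAfter.UTC().Format(time.RFC3339),
			now.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("certificate not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```

The kubelet-serving lines from certificate_manager.go:356 point the same way: both rotation deadlines it computes (2025-11-28 and 2025-11-07) are already in the past relative to the node clock, so rotation is perpetually due. Taken together with the webhook failure, the cluster appears to have been resumed well after its certificates aged out.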
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:35 crc kubenswrapper[4979]: E0130 21:41:35.069546 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:35 crc kubenswrapper[4979]: E0130 21:41:35.069782 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.073760 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:43:01.876710483 +0000 UTC Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.085988 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.107132 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c4a287be242083cf80af657b8302d31508de68fd02eab25238508bc8e58490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9276f41b8d5c19dd7bcb9e134abc196b5001d74f132824170885e7050c62220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.129529 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.129600 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.129617 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.129643 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.129659 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.140699 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34ce4851-1ecc-47da-89ca-09894eb0908a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:22Z\\\",\\\"message\\\":\\\"t:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 21:41:22.154367 7040 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-pk47q] creating logical port openshift-multus_network-metrics-daemon-pk47q for pod on switch crc\\\\nF0130 21:41:22.154431 7040 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:41:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gg6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jttsv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.158561 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7a0992-0b0f-4219-ac47-fb6021840903\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e19a08d3bd21dd7d703b0b2dc497952463aac9a91713a55e42829063fc619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91063ccd02e20f67d270a7ce1927ae0cc693de9286ceb120b574052a792c3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vl5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xz6s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 
21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.172937 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pk47q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0632938-c88a-4c22-b0e7-8f7473532f07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbmzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pk47q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.192685 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3dd3ec-1102-4c07-82da-104ad61fd41f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b29baff6b64ae9923eb8c3a9824c90722fe24521d52c2842e6ed50404f0264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d3f1e5db128d4865df3cd7f7a0b575a8d5a50ebfa461d2092ce56a9310160db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.217113 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.233479 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.233510 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.233521 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.233538 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.233550 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.234375 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:40:09Z\\\",\\\"message\\\":\\\"W0130 21:40:08.348469 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 21:40:08.351441 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769809208 cert, and key in /tmp/serving-cert-1230760530/serving-signer.crt, /tmp/serving-cert-1230760530/serving-signer.key\\\\nI0130 21:40:08.670128 1 observer_polling.go:159] Starting file observer\\\\nW0130 21:40:08.674061 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 21:40:08.674226 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:40:08.675003 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1230760530/tls.crt::/tmp/serving-cert-1230760530/tls.key\\\\\\\"\\\\nF0130 21:40:09.138832 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.256488 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cad92385165a732f139e7c56dc1a40f6187d67636513d99df4fa230078442232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.276156 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a30401e3-0116-4c76-abd5-b4b07f03a278\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1efd51069979bbe474ee54db9440c9515536778f71ac30bd732d7d932af299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9862753c1bffbba46a14197c32735cedae6571638a4cc1b720b84a53ca7fd8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901cfa0cbe2f6d874e276b00bcb0bf962ea70f4c822d964813fe72bf21a47141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fc9e0f5925a3c3aae3759e5ddd92f0e9273da8eaafaaef80806247c32cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.292466 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.308296 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.320138 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.336752 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-75j89" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8cf2b0e-bd92-47c4-b5d1-1fa7a36ff54e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11496451325ed6beb43774b76bda29972b9a533ebdd4dfc9776d9847f0f11ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d7b74d11142a6cb9c81a4c54191306afc4a636d287157042ab5e8b66188ca3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6919b982a888b5ef7e30268e909cfcdf7ae25cf9dde5281cd412967d1b3f0eae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d3342557256a40998df48b74ac906011257ebbc248d44b57efefd4815611b38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e0cbf6b041e099f3fe3cd8d708f624e33da53cc218b2036a7a2427a20c8510\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8b57391886e6d467448209be02089ae96b0e0e7210976049e93512acb460bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac722cc49858a61ef449ee84108995050d8281a8c03cc50fede793a1475bc0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:40:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m46p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-75j89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.337585 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.337717 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc 
kubenswrapper[4979]: I0130 21:41:35.337808 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.337884 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.337949 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.350390 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xh5mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6722e8df-a635-4808-b6b9-d5633fc3d34b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:41:14Z\\\",\\\"message\\\":\\\"2026-01-30T21:40:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8\\\\n2026-01-30T21:40:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4e60751-f95e-4c85-81c1-48a5e4c959d8 to /host/opt/cni/bin/\\\\n2026-01-30T21:40:28Z [verbose] multus-daemon started\\\\n2026-01-30T21:40:28Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:41:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8gr57\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xh5mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.365327 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.384637 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.402481 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:35Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.440895 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.440940 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.440952 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.440972 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.440984 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.544002 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.544061 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.544072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.544090 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.544101 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.649010 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.649143 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.649174 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.649212 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.649254 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.752691 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.752749 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.752763 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.752785 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.752802 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.856458 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.856566 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.856589 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.856625 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.856650 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.960471 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.960539 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.960560 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.960586 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:35 crc kubenswrapper[4979]: I0130 21:41:35.960604 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:35Z","lastTransitionTime":"2026-01-30T21:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.064572 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.064648 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.064664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.064694 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.064712 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.068928 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:36 crc kubenswrapper[4979]: E0130 21:41:36.069229 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.074083 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:40:44.591997494 +0000 UTC Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.167580 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.167713 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.167733 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.167761 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.167779 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.271293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.271378 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.271396 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.271425 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.271446 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.374007 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.374072 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.374084 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.374105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.374120 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.477528 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.477605 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.477629 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.477664 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.477693 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.580803 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.580851 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.580866 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.580885 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.580896 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.684170 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.684246 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.684264 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.684293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.684312 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.787484 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.787553 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.787572 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.787599 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.787619 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.890553 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.890615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.890632 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.890656 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.890674 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.994211 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.994276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.994294 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.994322 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:36 crc kubenswrapper[4979]: I0130 21:41:36.994340 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:36Z","lastTransitionTime":"2026-01-30T21:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.069815 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.069854 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:37 crc kubenswrapper[4979]: E0130 21:41:37.069977 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.069999 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:37 crc kubenswrapper[4979]: E0130 21:41:37.070165 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:37 crc kubenswrapper[4979]: E0130 21:41:37.070360 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.071060 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:41:37 crc kubenswrapper[4979]: E0130 21:41:37.071216 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.074384 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:19:02.473502963 +0000 UTC Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.100802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.100863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.100878 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.100903 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.100918 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.204790 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.204860 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.204874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.204897 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.204911 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.307388 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.307456 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.307477 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.307503 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.307521 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.411384 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.411495 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.411517 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.411545 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.411571 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.515402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.515467 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.515487 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.515513 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.515532 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.619078 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.619140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.619159 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.619184 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.619204 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.722638 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.722718 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.722747 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.722778 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.722804 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.826456 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.826545 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.826568 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.826602 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.826622 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.929293 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.929356 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.929381 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.929417 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:37 crc kubenswrapper[4979]: I0130 21:41:37.929441 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:37Z","lastTransitionTime":"2026-01-30T21:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.032753 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.032820 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.032841 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.032906 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.032927 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.069792 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:38 crc kubenswrapper[4979]: E0130 21:41:38.069982 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.074942 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 11:07:19.724773686 +0000 UTC Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.137709 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.137766 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.137778 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.137796 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.137810 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.240874 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.240938 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.240951 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.241020 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.241189 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.343990 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.344106 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.344132 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.344165 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.344187 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.447870 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.447942 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.447959 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.447993 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.448018 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.551705 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.551785 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.551802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.551836 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.551854 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.654402 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.654470 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.654495 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.654523 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.654545 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.758675 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.758775 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.758804 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.758843 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.758871 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.863448 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.863499 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.863512 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.863532 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.863545 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.966651 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.966697 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.966706 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.966722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:38 crc kubenswrapper[4979]: I0130 21:41:38.966735 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:38Z","lastTransitionTime":"2026-01-30T21:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.068805 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.068815 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.069015 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:39 crc kubenswrapper[4979]: E0130 21:41:39.069179 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:39 crc kubenswrapper[4979]: E0130 21:41:39.069269 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:39 crc kubenswrapper[4979]: E0130 21:41:39.069371 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.070900 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.070949 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.070964 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.070986 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.071000 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.075903 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 10:15:36.806236322 +0000 UTC Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.174961 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.174998 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.175010 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.175045 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.175060 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.278899 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.278958 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.278969 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.278991 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.279004 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.383148 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.383202 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.383215 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.383236 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.383249 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.487158 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.487255 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.487269 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.487317 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.487368 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.590375 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.590447 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.590465 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.590496 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.590517 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.693154 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.693234 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.693252 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.693280 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.693301 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.797161 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.797233 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.797244 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.797261 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.797272 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.902625 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.902726 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.902751 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.902790 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:39 crc kubenswrapper[4979]: I0130 21:41:39.902819 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:39Z","lastTransitionTime":"2026-01-30T21:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.007713 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.007798 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.007821 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.007853 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.007874 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.068906 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:40 crc kubenswrapper[4979]: E0130 21:41:40.069179 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.076582 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:00:39.642528673 +0000 UTC Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.111670 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.111758 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.111777 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.111808 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.111873 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.216480 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.216541 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.216555 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.216583 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.216601 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.320015 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.320146 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.320172 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.320208 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.320233 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.423791 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.423865 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.423891 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.423924 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.423947 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.527789 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.527850 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.527867 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.527899 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.527915 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.632543 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.632604 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.632618 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.632642 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.632657 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.735844 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.735908 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.735919 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.735939 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.735954 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.839651 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.839729 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.839750 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.839780 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.839799 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.943334 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.943731 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.943900 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.944096 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:40 crc kubenswrapper[4979]: I0130 21:41:40.944258 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:40Z","lastTransitionTime":"2026-01-30T21:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.048698 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.049175 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.049342 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.049502 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.049634 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.069207 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.069207 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.069591 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:41 crc kubenswrapper[4979]: E0130 21:41:41.069932 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:41 crc kubenswrapper[4979]: E0130 21:41:41.070099 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:41 crc kubenswrapper[4979]: E0130 21:41:41.070298 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.077242 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:19:57.965638921 +0000 UTC Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.152541 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.152580 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.152590 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.152605 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.152616 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.256323 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.256398 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.256427 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.256462 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.256488 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.359454 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.359553 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.359574 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.359606 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.359627 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.463309 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.463515 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.463580 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.463615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.463633 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.567140 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.567663 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.567836 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.567988 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.568207 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.672160 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.672230 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.672250 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.672274 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.672299 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.775798 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.775857 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.775872 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.775894 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.775911 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.880187 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.880271 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.880288 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.880317 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.880341 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.984139 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.984183 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.984198 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.984232 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:41 crc kubenswrapper[4979]: I0130 21:41:41.984248 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:41Z","lastTransitionTime":"2026-01-30T21:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.068746 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:42 crc kubenswrapper[4979]: E0130 21:41:42.068996 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.078123 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:34:59.454444404 +0000 UTC Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.087762 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.087846 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.087863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.087883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.087897 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.190506 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.190565 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.190576 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.190596 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.190608 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.294118 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.294153 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.294163 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.294178 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.294190 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.396401 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.396458 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.396472 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.396490 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.396501 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.500329 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.500404 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.500429 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.500461 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.500486 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.603641 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.603718 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.603735 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.603764 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.603784 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.706519 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.706558 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.706576 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.706593 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.706604 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.809743 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.809831 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.809857 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.809892 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.809916 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.913665 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.913749 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.913771 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.913799 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:42 crc kubenswrapper[4979]: I0130 21:41:42.913818 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:42Z","lastTransitionTime":"2026-01-30T21:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.019382 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.019465 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.019492 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.019527 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.019552 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.069417 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:43 crc kubenswrapper[4979]: E0130 21:41:43.069578 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.069670 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.069743 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:43 crc kubenswrapper[4979]: E0130 21:41:43.069734 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:43 crc kubenswrapper[4979]: E0130 21:41:43.069913 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.078978 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:15:24.359424155 +0000 UTC Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.086594 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.123373 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.123413 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.123422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.123438 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.123450 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.225700 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.225743 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.225752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.225767 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.225776 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.328740 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.328788 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.328800 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.328815 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.328825 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.432604 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.432679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.432701 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.432731 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.432752 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.535468 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.535628 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.535658 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.535690 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.535711 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.639272 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.639334 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.639353 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.639382 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.639398 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.743905 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.745357 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.745385 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.745414 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.745434 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.848685 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.848749 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.848772 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.848802 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.848824 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.906511 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.906567 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.906587 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.906612 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.906630 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: E0130 21:41:43.928767 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.934000 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.934116 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.934141 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.934165 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.934182 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: E0130 21:41:43.955774 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.961786 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.961853 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.961872 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.961901 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.961924 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:43 crc kubenswrapper[4979]: E0130 21:41:43.983881 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.989221 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.989257 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.989269 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.989286 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:43 crc kubenswrapper[4979]: I0130 21:41:43.989299 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:43Z","lastTransitionTime":"2026-01-30T21:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:44 crc kubenswrapper[4979]: E0130 21:41:44.011619 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.017308 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.017368 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.017397 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.017429 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.017455 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:44 crc kubenswrapper[4979]: E0130 21:41:44.040772 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:41:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f159bd6e-a7e4-4439-9f37-0bcc8094103f\\\",\\\"systemUUID\\\":\\\"e7905fc5-1d22-4ae8-ba0f-c56ed758748c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:44Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:44 crc kubenswrapper[4979]: E0130 21:41:44.040997 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.043130 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.043191 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.043217 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.043250 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.043275 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.069106 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:44 crc kubenswrapper[4979]: E0130 21:41:44.069465 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.079227 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:36:01.442544161 +0000 UTC Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.146210 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.146291 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.146328 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.146362 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.146385 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.250290 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.250444 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.250474 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.250835 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.250861 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.354126 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.354192 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.354210 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.354238 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.354262 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.457538 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.457679 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.457699 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.457723 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.457739 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
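[editor's note] The webhook failure that kills every one of the status patches above is a plain validity-window check: the node.network-node-identity.openshift.io serving certificate's NotAfter (2025-08-24T17:21:41Z) is months behind the node's clock (2026-01-30). The Go sketch below reproduces that check with crypto/x509 against a throwaway self-signed certificate; only the dates are taken from the log, everything else is illustrative.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "time"
    )

    func main() {
        // Build a throwaway self-signed cert whose validity window mirrors the log.
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "network-node-identity.openshift.io"},
            NotBefore:    time.Date(2025, 5, 24, 17, 21, 41, 0, time.UTC),
            NotAfter:     time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC),
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        cert, _ := x509.ParseCertificate(der)

        // The node's clock at the time of the failing Post to 127.0.0.1:9743.
        now := time.Date(2026, 1, 30, 21, 41, 43, 0, time.UTC)

        // The same window test the TLS handshake performs before anything else.
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        }
    }

No amount of retrying on the kubelet side can succeed until that certificate is reissued (or the clock is corrected), which is why the identical error recurs on every attempt.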
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.561379 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.561450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.561473 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.561504 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.561525 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.664120 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.664195 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.664223 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.664254 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.664275 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.769946 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.770017 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.770059 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.770087 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.770118 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
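[editor's note] For contrast with the open-ended event repetition above, the "update node status exceeds retry count" error at 21:41:44.040997 is a hard stop: within each sync the kubelet attempts the status patch only a fixed number of times before giving up until the next tick. Upstream kubelet keeps that limit in a small constant (nodeStatusUpdateRetry; 5 in the versions I have read, treated here as an assumption). A minimal sketch of the pattern, with the patch function as a stand-in:

    package main

    import (
        "errors"
        "fmt"
    )

    // Assumed to match kubelet's nodeStatusUpdateRetry constant.
    const nodeStatusUpdateRetry = 5

    // updateNodeStatus retries the per-attempt patch against the API server
    // (the call the admission webhook is rejecting throughout this log).
    func updateNodeStatus(tryUpdateNodeStatus func(attempt int) error) error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := tryUpdateNodeStatus(i); err != nil {
                // Mirrors "Error updating node status, will retry".
                fmt.Println("Error updating node status, will retry:", err)
                continue
            }
            return nil
        }
        return errors.New("update node status exceeds retry count")
    }

    func main() {
        err := updateNodeStatus(func(attempt int) error {
            return fmt.Errorf("attempt %d: webhook certificate expired", attempt)
        })
        fmt.Println(err) // update node status exceeds retry count
    }

That bounded loop is why the log shows exactly a handful of "will retry" patch errors followed by one "Unable to update node status" before the cycle starts over on the next sync period.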
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.773690 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:44 crc kubenswrapper[4979]: E0130 21:41:44.774219 4979 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 21:41:44 crc kubenswrapper[4979]: E0130 21:41:44.774820 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs podName:d0632938-c88a-4c22-b0e7-8f7473532f07 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:48.774757611 +0000 UTC m=+164.736004684 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs") pod "network-metrics-daemon-pk47q" (UID: "d0632938-c88a-4c22-b0e7-8f7473532f07") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.873752 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.873846 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.873867 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.873896 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.873914 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
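[editor's note] The nestedpendingoperations entry above pushes the next MountVolume attempt out to 21:42:48, a durationBeforeRetry of 1m4s. 64s is 500ms doubled seven times, which is consistent with an exponential backoff using a 500ms initial delay and a factor of two; the initial value and the cap below are modeled on kubelet's volume-operation backoff but are assumptions here, not quotes of its constants.

    package main

    import (
        "fmt"
        "time"
    )

    // expBackoff doubles the delay after every failure, up to a cap.
    type expBackoff struct {
        delay, max time.Duration
    }

    func (b *expBackoff) next() time.Duration {
        d := b.delay
        b.delay *= 2
        if b.delay > b.max {
            b.delay = b.max
        }
        return d
    }

    func main() {
        b := expBackoff{delay: 500 * time.Millisecond, max: 2*time.Minute + 2*time.Second}
        for i := 1; i <= 8; i++ {
            fmt.Printf("failure %d: durationBeforeRetry %s\n", i, b.next())
        }
        // failure 8 prints 1m4s, the value in the log entry above.
    }

The doubling explains why the mount failures grow quieter over the life of the log even though the underlying cause (the unregistered metrics-daemon-secret) never changes.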
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.977341 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.977410 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.977428 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.977455 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:44 crc kubenswrapper[4979]: I0130 21:41:44.977473 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:44Z","lastTransitionTime":"2026-01-30T21:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.069746 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.069746 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.069889 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:45 crc kubenswrapper[4979]: E0130 21:41:45.070158 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:45 crc kubenswrapper[4979]: E0130 21:41:45.070739 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:45 crc kubenswrapper[4979]: E0130 21:41:45.071132 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
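[editor's note] Underneath every NetworkReady=false line in this log is one filesystem condition: the runtime found no network config under /etc/kubernetes/cni/net.d/. Readiness flips once the network operator (OVN-Kubernetes on this cluster) writes its config there. A minimal check in the same spirit; the accepted extensions follow libcni's usual convention and are an assumption here:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent reports whether any CNI network config exists in dir.
    // The extension list mirrors libcni's conventional defaults (assumed).
    func cniConfigPresent(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
        if err != nil || !ok {
            fmt.Println("network plugin not ready: no CNI configuration file in /etc/kubernetes/cni/net.d/")
            return
        }
        fmt.Println("NetworkReady=true")
    }

Here the config never appears because the network operator's own pods cannot be admitted past the expired webhook certificate, so the node stays NotReady and pod sandboxes for the network-diagnostics pods above cannot be created: a circular failure rooted in the certificate, not in CNI itself.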
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.079468 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:02:14.323600767 +0000 UTC Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.079625 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.079698 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.079722 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.079753 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.079779 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.092158 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://461c18b806099b1b2d88c6584fc8d0462d838881c01ffaf361752538cf282fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.111743 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.126640 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p8nz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01c7f257-42d4-4934-805e-7f5d80988fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0e1c19c9f9d5297c4ea5e112d3f96177077c27e1c99594fcbf38503b81d5aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p8nz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.143340 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f2xld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65d4cf3f-dc90-408a-9652-740d7472fb39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da984b5ec3410e61aec72b865786b955c3cc8b933ee15c8ba9e67f8d3ecc42ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f2xld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.162383 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770b6823-96cd-42f3-bc4b-7ede1f7a0643\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95258a16e5327cdca962a7907373ab304e6fe0b411fcea79f17df8b64d6b898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58672d102d2f25359f43305785d4c5b1eddc80f48dd4e91233bce419bc97c337\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58846677ffecde001bef35cf6e92a3c3883ec2cfb74438c6ae053d6c39e7ddcc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.183945 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28767351-ec5c-4f9e-8b01-2954eaf4ea30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://602069a8d6d92007a23dd44c5f42438665534ab18ffd89f7645132c45b2ec44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92bb7cca6da53077db
78f23df2635498723ede984481aeb42776383900f22d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zdwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kqsqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:41:45Z is after 2025-08-24T17:21:41Z" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.186615 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.186710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.186730 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.186811 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.186897 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.226156 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-75j89" podStartSLOduration=80.226137419 podStartE2EDuration="1m20.226137419s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.225651765 +0000 UTC m=+101.186898818" watchObservedRunningTime="2026-01-30 21:41:45.226137419 +0000 UTC m=+101.187384452" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.246500 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xh5mg" podStartSLOduration=80.246472958 podStartE2EDuration="1m20.246472958s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.246334185 +0000 UTC m=+101.207581248" watchObservedRunningTime="2026-01-30 21:41:45.246472958 +0000 UTC m=+101.207719991" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.327179 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.327236 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.327253 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.327276 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.327289 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.392376 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xz6s9" podStartSLOduration=80.392351958 podStartE2EDuration="1m20.392351958s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.376391108 +0000 UTC m=+101.337638141" watchObservedRunningTime="2026-01-30 21:41:45.392351958 +0000 UTC m=+101.353598991" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.414190 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.414149199 podStartE2EDuration="27.414149199s" podCreationTimestamp="2026-01-30 21:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.413601813 +0000 UTC m=+101.374848846" watchObservedRunningTime="2026-01-30 21:41:45.414149199 +0000 UTC m=+101.375396252" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.430019 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.430083 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.430096 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.430120 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.430134 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.489953 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.489929247 podStartE2EDuration="46.489929247s" podCreationTimestamp="2026-01-30 21:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.489284588 +0000 UTC m=+101.450531621" watchObservedRunningTime="2026-01-30 21:41:45.489929247 +0000 UTC m=+101.451176280" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.517608 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.5175817289999998 podStartE2EDuration="2.517581729s" podCreationTimestamp="2026-01-30 21:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.511859961 +0000 UTC m=+101.473107004" watchObservedRunningTime="2026-01-30 21:41:45.517581729 +0000 UTC m=+101.478828762" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.532575 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.532638 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.532653 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.532678 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.532692 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.536523 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.53650411 podStartE2EDuration="1m20.53650411s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:45.536201552 +0000 UTC m=+101.497448585" watchObservedRunningTime="2026-01-30 21:41:45.53650411 +0000 UTC m=+101.497751143" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.635524 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.635583 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.635601 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.635622 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.635639 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.738257 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.738315 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.738333 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.738360 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.738377 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.842266 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.842348 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.842373 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.842408 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.842431 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.946225 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.946308 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.946332 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.946371 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:45 crc kubenswrapper[4979]: I0130 21:41:45.946400 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:45Z","lastTransitionTime":"2026-01-30T21:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.049507 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.049691 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.049720 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.049750 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.049769 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.072266 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:46 crc kubenswrapper[4979]: E0130 21:41:46.073536 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.079835 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:41:02.303666833 +0000 UTC Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.153239 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.153362 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.153382 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.153411 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.153430 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.257343 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.257453 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.257471 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.257498 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.257518 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.359743 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.359884 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.359916 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.359964 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.359995 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.463710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.463774 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.463793 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.463826 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.463847 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.567495 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.567563 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.567582 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.567612 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.567634 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.670993 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.671165 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.671188 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.671223 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.671243 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.773948 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.773984 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.773997 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.774015 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.774026 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.876796 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.876850 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.876863 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.876881 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.876895 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.979808 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.979847 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.979858 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.979877 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:46 crc kubenswrapper[4979]: I0130 21:41:46.979886 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:46Z","lastTransitionTime":"2026-01-30T21:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.069536 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.069560 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.069785 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:47 crc kubenswrapper[4979]: E0130 21:41:47.069904 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:47 crc kubenswrapper[4979]: E0130 21:41:47.070009 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:47 crc kubenswrapper[4979]: E0130 21:41:47.070178 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.080022 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:21:12.610387454 +0000 UTC Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.081401 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.081455 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.081467 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.081485 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.081497 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.184092 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.184130 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.184143 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.184161 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.184171 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.287059 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.287099 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.287109 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.287125 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.287135 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.395163 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.395221 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.395236 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.395260 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.395279 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.499320 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.499378 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.499394 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.499417 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.499431 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.602886 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.602928 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.602938 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.602955 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.602967 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.705267 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.705337 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.705348 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.705363 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.705374 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.808989 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.809059 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.809070 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.809090 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.809103 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.912223 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.912302 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.912319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.912341 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:47 crc kubenswrapper[4979]: I0130 21:41:47.912357 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:47Z","lastTransitionTime":"2026-01-30T21:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.015372 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.015446 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.015469 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.015498 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.015517 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.069501 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:48 crc kubenswrapper[4979]: E0130 21:41:48.069779 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.080620 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:38:11.027743363 +0000 UTC Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.118275 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.118310 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.118320 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.118336 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.118345 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.221071 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.221118 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.221128 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.221143 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.221153 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.323395 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.323424 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.323433 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.323450 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.323462 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.426685 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.426744 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.426759 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.426783 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.426795 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.529710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.529783 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.529807 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.529839 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.529858 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.633142 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.633239 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.633266 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.633308 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.633411 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.736903 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.737006 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.737085 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.737125 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.737150 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.839973 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.840021 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.840050 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.840068 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.840079 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.943540 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.943584 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.943593 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.943611 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:48 crc kubenswrapper[4979]: I0130 21:41:48.943625 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:48Z","lastTransitionTime":"2026-01-30T21:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.046999 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.047097 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.047119 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.047144 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.047163 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.069523 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.069521 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:49 crc kubenswrapper[4979]: E0130 21:41:49.069825 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.069557 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:49 crc kubenswrapper[4979]: E0130 21:41:49.069938 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:49 crc kubenswrapper[4979]: E0130 21:41:49.070154 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.080984 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:21:30.442129193 +0000 UTC Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.149873 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.149926 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.149940 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.149962 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.149975 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.253620 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.253675 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.253687 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.253710 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.253722 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.356988 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.357105 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.357126 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.357155 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.357177 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.459613 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.459660 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.459667 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.459686 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.459696 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.563509 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.563607 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.563635 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.563665 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.563686 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.666422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.666487 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.666500 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.666522 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.666541 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.769455 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.769561 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.769591 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.769644 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.769666 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.872286 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.872327 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.872338 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.872352 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.872362 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.975278 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.975353 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.975422 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.975453 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:49 crc kubenswrapper[4979]: I0130 21:41:49.975470 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:49Z","lastTransitionTime":"2026-01-30T21:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.069703 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:41:50 crc kubenswrapper[4979]: E0130 21:41:50.069941 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.071268 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:41:50 crc kubenswrapper[4979]: E0130 21:41:50.071669 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.079596 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.079663 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.079688 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.079717 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.079741 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.081858 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:49:31.922886446 +0000 UTC Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.182635 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.182684 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.182697 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.182717 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.182732 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.286358 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.286476 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.286545 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.286573 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.286592 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.390202 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.390268 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.390286 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.390315 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.390337 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.493121 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.493162 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.493173 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.493190 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.493201 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.596830 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.596886 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.596898 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.596919 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.596931 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.699760 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.699866 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.699883 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.699907 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.699924 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.803518 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.803572 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.803630 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.803648 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.803660 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.907504 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.907579 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.907592 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.907616 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:50 crc kubenswrapper[4979]: I0130 21:41:50.907632 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:50Z","lastTransitionTime":"2026-01-30T21:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.010720 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.010763 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.010772 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.010787 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.010797 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.068870 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.068970 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:41:51 crc kubenswrapper[4979]: E0130 21:41:51.069062 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:41:51 crc kubenswrapper[4979]: E0130 21:41:51.069153 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.069264 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:41:51 crc kubenswrapper[4979]: E0130 21:41:51.069331 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.082709 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:18:40.112510908 +0000 UTC Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.114903 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.114966 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.114981 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.115000 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.115012 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.217689 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.217725 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.217733 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.217748 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.217761 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.321152 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.321218 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.321237 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.321262 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.321306 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.423938 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.424013 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.424060 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.424091 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.424113 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.528171 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.528272 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.528284 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.528309 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.528325 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.632184 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.632729 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.632742 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.632764 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.632777 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.735319 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.735380 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.735393 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.735413 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.735427 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.839043 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.839104 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.839116 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.839141 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:41:51 crc kubenswrapper[4979]: I0130 21:41:51.839157 4979 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:41:51Z","lastTransitionTime":"2026-01-30T21:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The five-entry block above repeats, identical apart from advancing timestamps, 24 times between 21:41:51.839 and 21:41:54.127, roughly every 100 ms; the repeats are collapsed here and only the interleaved distinct entries are kept below.]
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.069171 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:52 crc kubenswrapper[4979]: E0130 21:41:52.069406 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:52 crc kubenswrapper[4979]: I0130 21:41:52.083329 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:03:42.092976458 +0000 UTC
Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.069136 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.069162 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.069384 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:41:53 crc kubenswrapper[4979]: E0130 21:41:53.069618 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:41:53 crc kubenswrapper[4979]: E0130 21:41:53.069756 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:41:53 crc kubenswrapper[4979]: E0130 21:41:53.069850 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:41:53 crc kubenswrapper[4979]: I0130 21:41:53.083759 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:07:42.489673239 +0000 UTC
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.069437 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q"
Jan 30 21:41:54 crc kubenswrapper[4979]: E0130 21:41:54.069671 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.084892 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:43:31.027791473 +0000 UTC
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.160275 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"]
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.160807 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.166387 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.166521 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.166692 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.166903 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.220357 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-p8nz9" podStartSLOduration=90.220314559 podStartE2EDuration="1m30.220314559s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:54.216062472 +0000 UTC m=+110.177309515" watchObservedRunningTime="2026-01-30 21:41:54.220314559 +0000 UTC m=+110.181561602"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.245392 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-f2xld" podStartSLOduration=90.245303868 podStartE2EDuration="1m30.245303868s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:54.245198485 +0000 UTC m=+110.206445528" watchObservedRunningTime="2026-01-30 21:41:54.245303868 +0000 UTC m=+110.206550941"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.282007 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.281791834 podStartE2EDuration="1m28.281791834s" podCreationTimestamp="2026-01-30 21:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:54.265826524 +0000 UTC m=+110.227073567" watchObservedRunningTime="2026-01-30 21:41:54.281791834 +0000 UTC m=+110.243038877"
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.290332 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
[Matching VerifyControllerAttachedVolume entries follow at 21:41:54.290 for the volumes "etc-ssl-certs", "etc-cvo-updatepayloads", "kube-api-access", and "serving-cert" of the same pod; they are collapsed here.]
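The three "Observed pod startup duration" entries above are plain arithmetic over the timestamps they carry: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, and podStartSLOduration matches it here because the image-pull timestamps are zero (no pull happened); excluding pull time from the SLO figure is my reading of the tracker, stated as an assumption. A small Go sketch reproducing the node-resolver-p8nz9 numbers:

    // startup_latency.go: recompute the podStart durations logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created := time.Date(2026, 1, 30, 21, 40, 24, 0, time.UTC)          // podCreationTimestamp
        observed := time.Date(2026, 1, 30, 21, 41, 54, 220314559, time.UTC) // watchObservedRunningTime
        var pulling time.Duration                                           // pull timestamps are zero above

        e2e := observed.Sub(created)
        slo := e2e - pulling // assumption: pull time is excluded from the SLO figure
        fmt.Printf("podStartE2EDuration=%s podStartSLOduration=%s\n", e2e, slo)
    }

Running it prints 1m30.220314559s for both figures, matching podStartSLOduration=90.220314559 in the entry above.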
Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.392601 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849"
[Matching MountVolume-started entries follow at 21:41:54.392 for the volumes "serving-cert", "service-ca", "etc-ssl-certs", and "etc-cvo-updatepayloads"; they are collapsed here.]
"MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.393338 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.395253 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.403231 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.413580 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mw849\" (UID: \"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849" Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.487695 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849" Jan 30 21:41:54 crc kubenswrapper[4979]: W0130 21:41:54.511121 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e08d4ed_5213_4b6a_bd78_92e91b0ba9fb.slice/crio-ce1c874701a3dcf7c080a84c78ecde3eb9e87d1fc593081b7bcda7a7e5b66b98 WatchSource:0}: Error finding container ce1c874701a3dcf7c080a84c78ecde3eb9e87d1fc593081b7bcda7a7e5b66b98: Status 404 returned error can't find the container with id ce1c874701a3dcf7c080a84c78ecde3eb9e87d1fc593081b7bcda7a7e5b66b98 Jan 30 21:41:54 crc kubenswrapper[4979]: I0130 21:41:54.713326 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849" event={"ID":"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb","Type":"ContainerStarted","Data":"ce1c874701a3dcf7c080a84c78ecde3eb9e87d1fc593081b7bcda7a7e5b66b98"} Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.069170 4979 util.go:30] "No sandbox for pod can be found. 
[From 21:41:55 the paired "No sandbox for pod can be found. Need to start a new one" / "Error syncing pod, skipping: network is not ready ..." entries keep recurring on a roughly 2 s cycle for the same four pods (network-metrics-daemon-pk47q, networking-console-plugin-85b44fc459-gdk6g, network-check-target-xd92c, network-check-source-55646444c4-trplf) through 21:42:11; those repeats are collapsed and only the distinct events are kept below.]
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.086019 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:48:06.741126457 +0000 UTC
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.086158 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.095508 4979 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.717918 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849" event={"ID":"2e08d4ed-5213-4b6a-bd78-92e91b0ba9fb","Type":"ContainerStarted","Data":"a945d26a981ee5d17f006cb8154b7d0921bb51266655d9527d1bc642d04c0f4f"}
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.733083 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podStartSLOduration=91.733062321 podStartE2EDuration="1m31.733062321s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:54.28417707 +0000 UTC m=+110.245424153" watchObservedRunningTime="2026-01-30 21:41:55.733062321 +0000 UTC m=+111.694309354"
Jan 30 21:41:55 crc kubenswrapper[4979]: I0130 21:41:55.733995 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mw849" podStartSLOduration=90.733988436 podStartE2EDuration="1m30.733988436s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:55.732712831 +0000 UTC m=+111.693959874" watchObservedRunningTime="2026-01-30 21:41:55.733988436 +0000 UTC m=+111.695235459"
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.735330 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/1.log"
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.736154 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/0.log"
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.736253 4979 generic.go:334] "Generic (PLEG): container finished" podID="6722e8df-a635-4808-b6b9-d5633fc3d34b" containerID="94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5" exitCode=1
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.736326 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerDied","Data":"94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5"}
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.736379 4979 scope.go:117] "RemoveContainer" containerID="553daa1913819c48ae82bafbf2ed3bbde56426ed93f4f6d2271bb9e9ba3148c7"
Jan 30 21:42:00 crc kubenswrapper[4979]: I0130 21:42:00.738021 4979 scope.go:117] "RemoveContainer" containerID="94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5"
Jan 30 21:42:00 crc kubenswrapper[4979]: E0130 21:42:00.738453 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-xh5mg_openshift-multus(6722e8df-a635-4808-b6b9-d5633fc3d34b)\"" pod="openshift-multus/multus-xh5mg" podUID="6722e8df-a635-4808-b6b9-d5633fc3d34b"
Jan 30 21:42:01 crc kubenswrapper[4979]: I0130 21:42:01.741629 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/1.log"
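The "back-off 10s" for kube-multus above, and the "back-off 40s" for ovnkube-controller just below, come from the kubelet's container restart backoff: the delay starts at 10 s and doubles on each consecutive failed restart, capped at 5 minutes. Those are the upstream kubelet defaults, assumed here rather than read from this node's configuration; a 40 s delay therefore suggests ovnkube-controller is on its third consecutive failure. A minimal sketch:

    // crashloop_backoff.go: the delay schedule behind CrashLoopBackOff.
    package main

    import (
        "fmt"
        "time"
    )

    // backoff returns the restart delay after the given number of
    // consecutive failures: 10s initially, doubling, capped at 5m.
    func backoff(failures int) time.Duration {
        d := 10 * time.Second
        for i := 1; i < failures; i++ {
            d *= 2
            if d >= 5*time.Minute {
                return 5 * time.Minute
            }
        }
        return d
    }

    func main() {
        for f := 1; f <= 6; f++ {
            fmt.Printf("failure #%d -> back-off %s\n", f, backoff(f))
        }
    }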
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:02 crc kubenswrapper[4979]: I0130 21:42:02.069563 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:42:02 crc kubenswrapper[4979]: E0130 21:42:02.069752 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jttsv_openshift-ovn-kubernetes(34ce4851-1ecc-47da-89ca-09894eb0908a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" Jan 30 21:42:03 crc kubenswrapper[4979]: I0130 21:42:03.069115 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:03 crc kubenswrapper[4979]: E0130 21:42:03.069326 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:03 crc kubenswrapper[4979]: I0130 21:42:03.069447 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:03 crc kubenswrapper[4979]: I0130 21:42:03.069703 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:03 crc kubenswrapper[4979]: E0130 21:42:03.069863 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:03 crc kubenswrapper[4979]: E0130 21:42:03.070156 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:04 crc kubenswrapper[4979]: I0130 21:42:04.068735 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:04 crc kubenswrapper[4979]: E0130 21:42:04.068964 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:05 crc kubenswrapper[4979]: I0130 21:42:05.069205 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:05 crc kubenswrapper[4979]: I0130 21:42:05.069211 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:05 crc kubenswrapper[4979]: I0130 21:42:05.070531 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:05 crc kubenswrapper[4979]: E0130 21:42:05.070528 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:05 crc kubenswrapper[4979]: E0130 21:42:05.070634 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:05 crc kubenswrapper[4979]: E0130 21:42:05.070710 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:05 crc kubenswrapper[4979]: E0130 21:42:05.105718 4979 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 21:42:05 crc kubenswrapper[4979]: E0130 21:42:05.184824 4979 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:42:06 crc kubenswrapper[4979]: I0130 21:42:06.069680 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:06 crc kubenswrapper[4979]: E0130 21:42:06.070004 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:07 crc kubenswrapper[4979]: I0130 21:42:07.069513 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:07 crc kubenswrapper[4979]: I0130 21:42:07.069513 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:07 crc kubenswrapper[4979]: E0130 21:42:07.069755 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:07 crc kubenswrapper[4979]: I0130 21:42:07.069566 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:07 crc kubenswrapper[4979]: E0130 21:42:07.069916 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:07 crc kubenswrapper[4979]: E0130 21:42:07.070179 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:08 crc kubenswrapper[4979]: I0130 21:42:08.068682 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:08 crc kubenswrapper[4979]: E0130 21:42:08.068895 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:09 crc kubenswrapper[4979]: I0130 21:42:09.069425 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:09 crc kubenswrapper[4979]: I0130 21:42:09.069541 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:09 crc kubenswrapper[4979]: E0130 21:42:09.069635 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:09 crc kubenswrapper[4979]: E0130 21:42:09.069939 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:09 crc kubenswrapper[4979]: I0130 21:42:09.069749 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:09 crc kubenswrapper[4979]: E0130 21:42:09.070159 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:10 crc kubenswrapper[4979]: I0130 21:42:10.069563 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:10 crc kubenswrapper[4979]: E0130 21:42:10.069735 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:10 crc kubenswrapper[4979]: E0130 21:42:10.186678 4979 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:42:11 crc kubenswrapper[4979]: I0130 21:42:11.069327 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:11 crc kubenswrapper[4979]: I0130 21:42:11.069327 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:11 crc kubenswrapper[4979]: E0130 21:42:11.069510 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:11 crc kubenswrapper[4979]: E0130 21:42:11.069545 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:11 crc kubenswrapper[4979]: I0130 21:42:11.069335 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:11 crc kubenswrapper[4979]: E0130 21:42:11.069641 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:12 crc kubenswrapper[4979]: I0130 21:42:12.069292 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:12 crc kubenswrapper[4979]: I0130 21:42:12.069834 4979 scope.go:117] "RemoveContainer" containerID="94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5" Jan 30 21:42:12 crc kubenswrapper[4979]: E0130 21:42:12.069768 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:12 crc kubenswrapper[4979]: I0130 21:42:12.788922 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/1.log" Jan 30 21:42:12 crc kubenswrapper[4979]: I0130 21:42:12.789071 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerStarted","Data":"63eeeb7e581e8ce3888839e2e83b0b7c4eb60c14ab5554f1fd5b47b9651c9ea0"} Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.069219 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.069414 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.069711 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:13 crc kubenswrapper[4979]: E0130 21:42:13.069844 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:13 crc kubenswrapper[4979]: E0130 21:42:13.069947 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:13 crc kubenswrapper[4979]: E0130 21:42:13.069992 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.070353 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.795757 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/3.log" Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.798727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerStarted","Data":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.799293 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:42:13 crc kubenswrapper[4979]: I0130 21:42:13.835261 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podStartSLOduration=108.835238155 podStartE2EDuration="1m48.835238155s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:13.834010602 +0000 UTC m=+129.795257635" watchObservedRunningTime="2026-01-30 21:42:13.835238155 +0000 UTC m=+129.796485188" Jan 30 21:42:14 crc kubenswrapper[4979]: I0130 21:42:14.068918 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:14 crc kubenswrapper[4979]: E0130 21:42:14.069117 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:14 crc kubenswrapper[4979]: I0130 21:42:14.313880 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pk47q"] Jan 30 21:42:14 crc kubenswrapper[4979]: I0130 21:42:14.802485 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:14 crc kubenswrapper[4979]: E0130 21:42:14.802634 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:15 crc kubenswrapper[4979]: I0130 21:42:15.069644 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:15 crc kubenswrapper[4979]: I0130 21:42:15.069672 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:15 crc kubenswrapper[4979]: E0130 21:42:15.070906 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:15 crc kubenswrapper[4979]: I0130 21:42:15.070948 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:15 crc kubenswrapper[4979]: E0130 21:42:15.071142 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:15 crc kubenswrapper[4979]: E0130 21:42:15.071185 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:15 crc kubenswrapper[4979]: E0130 21:42:15.187762 4979 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:42:16 crc kubenswrapper[4979]: I0130 21:42:16.069293 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:16 crc kubenswrapper[4979]: E0130 21:42:16.069628 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:17 crc kubenswrapper[4979]: I0130 21:42:17.069437 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:17 crc kubenswrapper[4979]: I0130 21:42:17.069502 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:17 crc kubenswrapper[4979]: I0130 21:42:17.069631 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:17 crc kubenswrapper[4979]: E0130 21:42:17.069772 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:17 crc kubenswrapper[4979]: E0130 21:42:17.069930 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:17 crc kubenswrapper[4979]: E0130 21:42:17.070060 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:18 crc kubenswrapper[4979]: I0130 21:42:18.069284 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:18 crc kubenswrapper[4979]: E0130 21:42:18.070025 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:19 crc kubenswrapper[4979]: I0130 21:42:19.069639 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:19 crc kubenswrapper[4979]: I0130 21:42:19.069707 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:19 crc kubenswrapper[4979]: I0130 21:42:19.069663 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:19 crc kubenswrapper[4979]: E0130 21:42:19.069977 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:42:19 crc kubenswrapper[4979]: E0130 21:42:19.070194 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:42:19 crc kubenswrapper[4979]: E0130 21:42:19.070327 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:42:20 crc kubenswrapper[4979]: I0130 21:42:20.069221 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:20 crc kubenswrapper[4979]: E0130 21:42:20.069413 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pk47q" podUID="d0632938-c88a-4c22-b0e7-8f7473532f07" Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.070287 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.070435 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.070925 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.073464 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.073684 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.073805 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 21:42:21 crc kubenswrapper[4979]: I0130 21:42:21.074953 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 21:42:22 crc kubenswrapper[4979]: I0130 21:42:22.069618 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:22 crc kubenswrapper[4979]: I0130 21:42:22.071975 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 21:42:22 crc kubenswrapper[4979]: I0130 21:42:22.072022 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.655999 4979 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.702134 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tdvvn"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.702984 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4zkpx"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.703385 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.703437 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.703499 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.704436 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.704556 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mr5l2"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.705331 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.707929 4979 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.708000 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.708093 4979 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.708113 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.712114 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.716095 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.716724 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hwb2t"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.717904 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hwb2t" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.733335 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.737170 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.738168 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l44fm"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.738636 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.739260 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.744804 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-h6sv5"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.745554 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.748877 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.749308 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.749526 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.749654 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.749769 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.749950 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.750121 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.750502 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.750860 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.750959 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.751071 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.750863 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ffscn"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.751248 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.751359 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.751930 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.752819 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.753778 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.753962 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.758462 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.758525 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.758575 4979 reflector.go:561] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": failed to list *v1.Secret: secrets "console-operator-dockercfg-4xjcr" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.760198 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4xjcr\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-operator-dockercfg-4xjcr\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.763314 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.763657 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.767525 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.768070 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.768899 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.769437 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pq8k"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.769682 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.769838 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 
21:42:24.770020 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.770341 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.774664 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783513 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783543 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tdvvn"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.778634 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7616472e-472c-4dfa-bf69-97d784e1e42f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783625 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp6s\" (UniqueName: \"kubernetes.io/projected/7616472e-472c-4dfa-bf69-97d784e1e42f-kube-api-access-2cp6s\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783648 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt57s\" (UniqueName: \"kubernetes.io/projected/c38d45aa-0713-4059-8c2d-59a9b1cb5861-kube-api-access-vt57s\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783681 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-encryption-config\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783700 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783723 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-node-pullsecrets\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " 
pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783741 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-audit-dir\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783766 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783784 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-config\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783801 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghmbg\" (UniqueName: \"kubernetes.io/projected/9e86ea88-60d1-4af7-8095-5ee44e176029-kube-api-access-ghmbg\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783819 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqrz\" (UniqueName: \"kubernetes.io/projected/ff61cd4b-2b9f-4588-be96-10038ccc4a92-kube-api-access-frqrz\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783837 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2vgz\" (UniqueName: \"kubernetes.io/projected/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-kube-api-access-v2vgz\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783857 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b609710f-4a90-417e-9e31-b1a045c1e8a2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783881 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-serving-cert\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 
21:42:24.783904 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b609710f-4a90-417e-9e31-b1a045c1e8a2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783926 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-etcd-client\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783950 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e86ea88-60d1-4af7-8095-5ee44e176029-audit-dir\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783966 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.783984 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c38d45aa-0713-4059-8c2d-59a9b1cb5861-auth-proxy-config\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784008 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwf2j\" (UniqueName: \"kubernetes.io/projected/b609710f-4a90-417e-9e31-b1a045c1e8a2-kube-api-access-dwf2j\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784052 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38d45aa-0713-4059-8c2d-59a9b1cb5861-config\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784081 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-etcd-serving-ca\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784132 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-audit-policies\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784157 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784181 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7616472e-472c-4dfa-bf69-97d784e1e42f-config\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784200 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784223 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-image-import-ca\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784240 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-encryption-config\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784260 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7616472e-472c-4dfa-bf69-97d784e1e42f-images\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784284 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff61cd4b-2b9f-4588-be96-10038ccc4a92-serving-cert\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784303 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: 
\"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784318 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b7v7\" (UniqueName: \"kubernetes.io/projected/21b53e08-d25e-41ab-a180-4b852eb77c8c-kube-api-access-4b7v7\") pod \"downloads-7954f5f757-hwb2t\" (UID: \"21b53e08-d25e-41ab-a180-4b852eb77c8c\") " pod="openshift-console/downloads-7954f5f757-hwb2t" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784333 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-serving-cert\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784354 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk5fj\" (UniqueName: \"kubernetes.io/projected/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-kube-api-access-lk5fj\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784376 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784390 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-etcd-client\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784407 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-trusted-ca\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784420 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-config\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784433 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-audit\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784449 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c38d45aa-0713-4059-8c2d-59a9b1cb5861-machine-approver-tls\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.777104 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.784918 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.778064 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.777293 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.785953 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.786336 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.786617 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.786797 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.786899 4979 reflector.go:561] object-"openshift-console-operator"/"console-operator-config": failed to list *v1.ConfigMap: configmaps "console-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.786950 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"console-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.787259 4979 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.787397 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the 
namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.787650 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.787740 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.787805 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.787921 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.788024 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.788251 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.778149 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.788121 4979 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: configmaps "oauth-serving-cert" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.788685 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"oauth-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.778264 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.788906 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.779233 4979 reflector.go:561] object-"openshift-console-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.789209 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.789225 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console-operator\": no 
relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.779892 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.789498 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.789648 4979 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.789702 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.789140 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.778214 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: W0130 21:42:24.789951 4979 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: secrets "config-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.789982 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"config-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.789397 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.790173 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.790263 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.790185 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.789443 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 21:42:24 crc 
kubenswrapper[4979]: W0130 21:42:24.789870 4979 reflector.go:561] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": failed to list *v1.Secret: secrets "openshift-config-operator-dockercfg-7pc5z" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Jan 30 21:42:24 crc kubenswrapper[4979]: E0130 21:42:24.790443 4979 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-7pc5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-config-operator-dockercfg-7pc5z\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.790498 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.795414 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.796246 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mr5l2"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.805798 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.806248 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.806453 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.806643 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.807710 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ww6sg"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.808371 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.822735 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.839951 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rvdlc"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.840457 4979 util.go:30] "No sandbox for pod can be found. 
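The `reflector.go:561` warnings and their paired `reflector.go:158` "Unhandled Error" entries above come from the kubelet's per-object Secret/ConfigMap informers hitting the node authorizer: until the authorizer's graph links node `crc` to a pod that mounts the object, LIST is denied with "no relationship found between node 'crc' and this object", and the reflector retries with backoff until the matching "Caches populated" line appears. A minimal client-go sketch of such a single-object reflector follows; the function name and wiring are illustrative, not the kubelet's actual code.

```go
// Minimal client-go sketch of a single-object ConfigMap reflector like the
// ones logging above; the function name and wiring are illustrative, not the
// kubelet's actual code. A node-authorizer 403 surfaces as the reflector.go
// "failed to list" warning and the reflector retries with backoff; the first
// successful List produces the "Caches populated" line.
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func singleConfigMapReflector(client kubernetes.Interface, ns, name string) *cache.Reflector {
	// Scope LIST/WATCH to exactly one object, matching the
	// object-"namespace"/"name" granularity in the log lines.
	sel := fields.OneTermEqualSelector("metadata.name", name).String()
	lw := &cache.ListWatch{
		ListFunc: func(opts metav1.ListOptions) (runtime.Object, error) {
			opts.FieldSelector = sel
			return client.CoreV1().ConfigMaps(ns).List(context.TODO(), opts)
		},
		WatchFunc: func(opts metav1.ListOptions) (watch.Interface, error) {
			opts.FieldSelector = sel
			return client.CoreV1().ConfigMaps(ns).Watch(context.TODO(), opts)
		},
	}
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	return cache.NewReflector(lw, &v1.ConfigMap{}, store, 0) // caller invokes Run(stopCh)
}
```

These 403s are transient at node startup and stop once the kubelet's pods are registered and the authorizer graph catches up, which is why forbidden warnings and successful "Caches populated" lines for neighboring objects interleave here.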
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.840961 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.849045 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.849161 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.853793 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.855206 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.855597 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.855859 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.856472 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.856861 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.857017 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.857198 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.857388 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.857501 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.857648 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.857770 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.858010 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.865282 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.865507 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 
30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.865853 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.870432 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.870945 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.871194 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.871200 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.871381 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.872081 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.872098 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.885003 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886011 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk5fj\" (UniqueName: \"kubernetes.io/projected/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-kube-api-access-lk5fj\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886084 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sztff\" (UniqueName: \"kubernetes.io/projected/de06742d-2533-4510-abec-ff0f35d84a45-kube-api-access-sztff\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886123 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/814afa6a-716d-4011-89f9-6ccbc336e361-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886164 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc 
kubenswrapper[4979]: I0130 21:42:24.886198 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/26cfd7ef-1024-479e-bdc5-e39429a16ee5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hm7cc\" (UID: \"26cfd7ef-1024-479e-bdc5-e39429a16ee5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886237 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-oauth-config\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886283 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886318 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-etcd-client\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886350 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-trusted-ca\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886389 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-config\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886392 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886417 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-audit\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886446 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c38d45aa-0713-4059-8c2d-59a9b1cb5861-machine-approver-tls\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886530 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886597 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d768fc5d-52c2-4901-a7cd-759d26f88251-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886624 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7616472e-472c-4dfa-bf69-97d784e1e42f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886689 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqg47\" (UniqueName: \"kubernetes.io/projected/cc25d794-4ead-4436-a026-179f655c13d4-kube-api-access-bqg47\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886723 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-client-ca\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886757 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rg56\" (UniqueName: \"kubernetes.io/projected/d768fc5d-52c2-4901-a7cd-759d26f88251-kube-api-access-5rg56\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:24 crc 
kubenswrapper[4979]: I0130 21:42:24.886786 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2xm4\" (UniqueName: \"kubernetes.io/projected/814afa6a-716d-4011-89f9-6ccbc336e361-kube-api-access-q2xm4\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886820 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cp6s\" (UniqueName: \"kubernetes.io/projected/7616472e-472c-4dfa-bf69-97d784e1e42f-kube-api-access-2cp6s\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886854 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt57s\" (UniqueName: \"kubernetes.io/projected/c38d45aa-0713-4059-8c2d-59a9b1cb5861-kube-api-access-vt57s\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886896 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-encryption-config\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886931 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-service-ca\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886966 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.886994 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-node-pullsecrets\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887016 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-audit-dir\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887063 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/814afa6a-716d-4011-89f9-6ccbc336e361-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887113 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887143 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887175 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-config\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887203 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887234 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghmbg\" (UniqueName: \"kubernetes.io/projected/9e86ea88-60d1-4af7-8095-5ee44e176029-kube-api-access-ghmbg\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887258 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frqrz\" (UniqueName: \"kubernetes.io/projected/ff61cd4b-2b9f-4588-be96-10038ccc4a92-kube-api-access-frqrz\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887288 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2vgz\" (UniqueName: \"kubernetes.io/projected/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-kube-api-access-v2vgz\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887322 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-config\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.887357 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-trusted-ca-bundle\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895157 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b609710f-4a90-417e-9e31-b1a045c1e8a2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895262 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895320 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-serving-cert\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895357 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/814afa6a-716d-4011-89f9-6ccbc336e361-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895399 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j27sl\" (UniqueName: \"kubernetes.io/projected/828e6466-447a-47f9-9727-3992db7c27c9-kube-api-access-j27sl\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895434 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b609710f-4a90-417e-9e31-b1a045c1e8a2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895464 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895503 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-etcd-client\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895529 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-console-config\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895557 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895583 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e86ea88-60d1-4af7-8095-5ee44e176029-audit-dir\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895607 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895630 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c38d45aa-0713-4059-8c2d-59a9b1cb5861-auth-proxy-config\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895658 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895687 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvqmg\" (UniqueName: \"kubernetes.io/projected/4d2da2c2-6056-4902-a20b-19333d24a600-kube-api-access-bvqmg\") pod \"dns-operator-744455d44c-ww6sg\" (UID: \"4d2da2c2-6056-4902-a20b-19333d24a600\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895715 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895717 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.895744 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwf2j\" (UniqueName: \"kubernetes.io/projected/b609710f-4a90-417e-9e31-b1a045c1e8a2-kube-api-access-dwf2j\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.896685 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d768fc5d-52c2-4901-a7cd-759d26f88251-serving-cert\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.896737 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-audit-policies\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.908009 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38d45aa-0713-4059-8c2d-59a9b1cb5861-config\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.908114 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.908593 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.910408 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c38d45aa-0713-4059-8c2d-59a9b1cb5861-auth-proxy-config\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.910665 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.913209 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hgm9w"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.913809 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.914268 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.914586 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.916509 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-etcd-serving-ca\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.916665 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d2da2c2-6056-4902-a20b-19333d24a600-metrics-tls\") pod \"dns-operator-744455d44c-ww6sg\" (UID: \"4d2da2c2-6056-4902-a20b-19333d24a600\") " pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.916773 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.916873 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cx5c\" (UniqueName: \"kubernetes.io/projected/26cfd7ef-1024-479e-bdc5-e39429a16ee5-kube-api-access-4cx5c\") pod \"cluster-samples-operator-665b6dd947-hm7cc\" (UID: \"26cfd7ef-1024-479e-bdc5-e39429a16ee5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.916961 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-audit-policies\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917082 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917170 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7616472e-472c-4dfa-bf69-97d784e1e42f-config\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917269 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917352 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-image-import-ca\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917427 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-encryption-config\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917539 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828e6466-447a-47f9-9727-3992db7c27c9-serving-cert\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917626 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.918117 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38d45aa-0713-4059-8c2d-59a9b1cb5861-config\") pod \"machine-approver-56656f9798-nm27z\" (UID: 
\"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.918636 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-etcd-serving-ca\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.918894 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-serving-cert\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.919237 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-audit-policies\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.919718 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9e86ea88-60d1-4af7-8095-5ee44e176029-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.919820 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7616472e-472c-4dfa-bf69-97d784e1e42f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917232 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-audit\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920191 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b609710f-4a90-417e-9e31-b1a045c1e8a2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917395 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917622 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e86ea88-60d1-4af7-8095-5ee44e176029-audit-dir\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920518 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7616472e-472c-4dfa-bf69-97d784e1e42f-images\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920549 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de06742d-2533-4510-abec-ff0f35d84a45-audit-dir\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920574 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff61cd4b-2b9f-4588-be96-10038ccc4a92-serving-cert\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920594 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920616 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b7v7\" (UniqueName: \"kubernetes.io/projected/21b53e08-d25e-41ab-a180-4b852eb77c8c-kube-api-access-4b7v7\") pod \"downloads-7954f5f757-hwb2t\" (UID: \"21b53e08-d25e-41ab-a180-4b852eb77c8c\") " pod="openshift-console/downloads-7954f5f757-hwb2t" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920635 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-serving-cert\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917449 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920887 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-node-pullsecrets\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920959 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-audit-dir\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.920830 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7616472e-472c-4dfa-bf69-97d784e1e42f-config\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.917483 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.921507 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-image-import-ca\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.922160 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7616472e-472c-4dfa-bf69-97d784e1e42f-images\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.923018 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.923272 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.923362 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-etcd-client\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.923774 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.924106 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.924282 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.924666 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-etcd-client\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.924676 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.925199 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.925930 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.926685 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-serving-cert\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.927339 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-config\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.927607 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.929325 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.929377 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.929402 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4zkpx"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.930862 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-trsfj"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.931209 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.931469 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.931960 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.932087 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.942892 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.943747 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.945600 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.960835 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-encryption-config\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.961184 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c38d45aa-0713-4059-8c2d-59a9b1cb5861-machine-approver-tls\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.964564 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-trusted-ca\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.964910 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b609710f-4a90-417e-9e31-b1a045c1e8a2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.966643 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff61cd4b-2b9f-4588-be96-10038ccc4a92-serving-cert\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.966775 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.967706 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.973119 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/9e86ea88-60d1-4af7-8095-5ee44e176029-encryption-config\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.973571 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2"] Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.975512 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.982764 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:24 crc kubenswrapper[4979]: I0130 21:42:24.998081 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.003223 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.008504 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.010895 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.011180 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.012224 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.013329 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwf2j\" (UniqueName: \"kubernetes.io/projected/b609710f-4a90-417e-9e31-b1a045c1e8a2-kube-api-access-dwf2j\") pod \"openshift-apiserver-operator-796bbdcf4f-m7s7j\" (UID: \"b609710f-4a90-417e-9e31-b1a045c1e8a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.013427 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.013873 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.014801 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.014901 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.014922 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.015918 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.018665 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.019529 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.028498 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.028899 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.028944 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/814afa6a-716d-4011-89f9-6ccbc336e361-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.028973 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j27sl\" (UniqueName: \"kubernetes.io/projected/828e6466-447a-47f9-9727-3992db7c27c9-kube-api-access-j27sl\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029001 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029022 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029061 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-console-config\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029087 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvqmg\" (UniqueName: \"kubernetes.io/projected/4d2da2c2-6056-4902-a20b-19333d24a600-kube-api-access-bvqmg\") pod \"dns-operator-744455d44c-ww6sg\" (UID: \"4d2da2c2-6056-4902-a20b-19333d24a600\") " pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029106 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029121 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029143 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-audit-policies\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029158 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d768fc5d-52c2-4901-a7cd-759d26f88251-serving-cert\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029201 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d2da2c2-6056-4902-a20b-19333d24a600-metrics-tls\") pod \"dns-operator-744455d44c-ww6sg\" (UID: \"4d2da2c2-6056-4902-a20b-19333d24a600\") " pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029218 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029234 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cx5c\" (UniqueName: \"kubernetes.io/projected/26cfd7ef-1024-479e-bdc5-e39429a16ee5-kube-api-access-4cx5c\") pod \"cluster-samples-operator-665b6dd947-hm7cc\" (UID: \"26cfd7ef-1024-479e-bdc5-e39429a16ee5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029259 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/828e6466-447a-47f9-9727-3992db7c27c9-serving-cert\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029279 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029297 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de06742d-2533-4510-abec-ff0f35d84a45-audit-dir\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029338 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sztff\" (UniqueName: \"kubernetes.io/projected/de06742d-2533-4510-abec-ff0f35d84a45-kube-api-access-sztff\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029357 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/814afa6a-716d-4011-89f9-6ccbc336e361-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029378 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029396 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/26cfd7ef-1024-479e-bdc5-e39429a16ee5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hm7cc\" (UID: \"26cfd7ef-1024-479e-bdc5-e39429a16ee5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029418 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-oauth-config\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029439 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029460 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d768fc5d-52c2-4901-a7cd-759d26f88251-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029500 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029520 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqg47\" (UniqueName: \"kubernetes.io/projected/cc25d794-4ead-4436-a026-179f655c13d4-kube-api-access-bqg47\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029540 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-client-ca\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029556 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rg56\" (UniqueName: \"kubernetes.io/projected/d768fc5d-52c2-4901-a7cd-759d26f88251-kube-api-access-5rg56\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029575 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2xm4\" (UniqueName: \"kubernetes.io/projected/814afa6a-716d-4011-89f9-6ccbc336e361-kube-api-access-q2xm4\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029620 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-service-ca\") 
pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029646 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/814afa6a-716d-4011-89f9-6ccbc336e361-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029670 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029702 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029728 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-config\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.029744 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-trusted-ca-bundle\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.031823 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/814afa6a-716d-4011-89f9-6ccbc336e361-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.033227 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-trusted-ca-bundle\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.035047 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 
21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.035683 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-service-ca\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.035814 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de06742d-2533-4510-abec-ff0f35d84a45-audit-dir\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.035827 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lzp5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.036647 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-console-config\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.037135 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.037850 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d768fc5d-52c2-4901-a7cd-759d26f88251-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.037945 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-audit-policies\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.038098 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.039188 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.039720 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.040208 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.040424 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjfp6"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.041239 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.042699 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.042715 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-client-ca\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.043158 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-config\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.043665 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.044279 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045116 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045356 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-oauth-config\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045450 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/26cfd7ef-1024-479e-bdc5-e39429a16ee5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hm7cc\" (UID: \"26cfd7ef-1024-479e-bdc5-e39429a16ee5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045761 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045839 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045852 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045944 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/814afa6a-716d-4011-89f9-6ccbc336e361-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.045976 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.046144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.046553 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.046880 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828e6466-447a-47f9-9727-3992db7c27c9-serving-cert\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.047656 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.048322 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.048945 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.054822 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l44fm"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.054872 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-h6sv5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.054889 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-969ns"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.055578 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.055709 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.056057 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.056086 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.056637 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.056793 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.057248 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.058646 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.060690 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d2da2c2-6056-4902-a20b-19333d24a600-metrics-tls\") pod \"dns-operator-744455d44c-ww6sg\" (UID: \"4d2da2c2-6056-4902-a20b-19333d24a600\") " pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.061684 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rvdlc"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.063334 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pq8k"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.065016 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ww6sg"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.069767 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.076212 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.076258 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hwb2t"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.076917 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-trsfj"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.078235 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.080175 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-lbd69"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.083981 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2zdrx"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.084140 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.085320 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.085367 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.085471 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.086241 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ffscn"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.092043 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.092110 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.094723 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tbr4j"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.096261 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.105087 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-464m7"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.105193 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.105321 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.106135 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.106455 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.106724 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-464m7" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.107664 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.109485 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.113294 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjfp6"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.114585 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.116064 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.122518 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lzp5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.124545 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.126271 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-969ns"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.127378 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.128228 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lbd69"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.129708 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.131813 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.133585 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-464m7"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.135048 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tbr4j"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.136732 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.137947 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.139118 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.140158 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"] Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.145693 4979 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.165637 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.188288 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.207176 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.230476 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.267472 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk5fj\" (UniqueName: \"kubernetes.io/projected/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-kube-api-access-lk5fj\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.286656 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt57s\" (UniqueName: \"kubernetes.io/projected/c38d45aa-0713-4059-8c2d-59a9b1cb5861-kube-api-access-vt57s\") pod \"machine-approver-56656f9798-nm27z\" (UID: \"c38d45aa-0713-4059-8c2d-59a9b1cb5861\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.307460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cp6s\" (UniqueName: \"kubernetes.io/projected/7616472e-472c-4dfa-bf69-97d784e1e42f-kube-api-access-2cp6s\") pod \"machine-api-operator-5694c8668f-mr5l2\" (UID: \"7616472e-472c-4dfa-bf69-97d784e1e42f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.338345 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frqrz\" (UniqueName: \"kubernetes.io/projected/ff61cd4b-2b9f-4588-be96-10038ccc4a92-kube-api-access-frqrz\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.346172 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2vgz\" (UniqueName: \"kubernetes.io/projected/daf9c301-ff6e-47d9-a8a0-d88e6cf53d48-kube-api-access-v2vgz\") pod \"apiserver-76f77b778f-tdvvn\" (UID: \"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48\") " pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.356120 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.361584 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghmbg\" (UniqueName: \"kubernetes.io/projected/9e86ea88-60d1-4af7-8095-5ee44e176029-kube-api-access-ghmbg\") pod \"apiserver-7bbb656c7d-zc7hq\" (UID: \"9e86ea88-60d1-4af7-8095-5ee44e176029\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.364796 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.376916 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.386177 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.388862 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b7v7\" (UniqueName: \"kubernetes.io/projected/21b53e08-d25e-41ab-a180-4b852eb77c8c-kube-api-access-4b7v7\") pod \"downloads-7954f5f757-hwb2t\" (UID: \"21b53e08-d25e-41ab-a180-4b852eb77c8c\") " pod="openshift-console/downloads-7954f5f757-hwb2t"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.394656 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j"]
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.408013 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.429232 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.442137 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hwb2t"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.453538 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.468410 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.488756 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.511671 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.516363 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.528138 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.546589 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 21:42:25 crc kubenswrapper[4979]: W0130 21:42:25.551503 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc38d45aa_0713_4059_8c2d_59a9b1cb5861.slice/crio-83ce17a74c9caf6841cd98c3af37b3d5536f88aa3af1fbe7e66ccf183a3a4128 WatchSource:0}: Error finding container 83ce17a74c9caf6841cd98c3af37b3d5536f88aa3af1fbe7e66ccf183a3a4128: Status 404 returned error can't find the container with id 83ce17a74c9caf6841cd98c3af37b3d5536f88aa3af1fbe7e66ccf183a3a4128
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.566140 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.587240 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.598584 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tdvvn"]
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.605690 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.625645 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.636395 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mr5l2"]
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.646971 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 21:42:25 crc kubenswrapper[4979]: W0130 21:42:25.662407 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7616472e_472c_4dfa_bf69_97d784e1e42f.slice/crio-b404dbc4f29bee0dc6d6aac3af8a5b63eda098c582d3e0822de24307a9f21dc1 WatchSource:0}: Error finding container b404dbc4f29bee0dc6d6aac3af8a5b63eda098c582d3e0822de24307a9f21dc1: Status 404 returned error can't find the container with id b404dbc4f29bee0dc6d6aac3af8a5b63eda098c582d3e0822de24307a9f21dc1
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.669073 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq"]
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.669519 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 30 21:42:25 crc kubenswrapper[4979]: W0130 21:42:25.685851 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e86ea88_60d1_4af7_8095_5ee44e176029.slice/crio-10484bacde70e407dc6877733466f503013f34bb2a73eae67f568151233af8f6 WatchSource:0}: Error finding container 10484bacde70e407dc6877733466f503013f34bb2a73eae67f568151233af8f6: Status 404 returned error can't find the container with id 10484bacde70e407dc6877733466f503013f34bb2a73eae67f568151233af8f6
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.686128 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.698885 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hwb2t"]
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.707625 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 30 21:42:25 crc kubenswrapper[4979]: W0130 21:42:25.710706 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21b53e08_d25e_41ab_a180_4b852eb77c8c.slice/crio-a397818806945dacae5885df09ade6fe6409b73708672ebb09cfbcc980387031 WatchSource:0}: Error finding container a397818806945dacae5885df09ade6fe6409b73708672ebb09cfbcc980387031: Status 404 returned error can't find the container with id a397818806945dacae5885df09ade6fe6409b73708672ebb09cfbcc980387031
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.741774 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-config\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.741859 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-serving-cert\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.741953 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742012 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-trusted-ca\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742086 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742160 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-bound-sa-token\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742181 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jlk\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-kube-api-access-s5jlk\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742197 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-certificates\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742221 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742293 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-tls\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742314 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742331 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-service-ca-bundle\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.742381 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tcrm\" (UniqueName: \"kubernetes.io/projected/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-kube-api-access-6tcrm\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.745548 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.245521147 +0000 UTC m=+142.206768180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.765861 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.786767 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.806924 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.830375 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.844530 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.844841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-stats-auth\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.844873 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvvnk\" (UniqueName: \"kubernetes.io/projected/7a7b036f-4e32-47e9-b700-da7ef3615e4f-kube-api-access-xvvnk\") pod \"ingress-canary-lbd69\" (UID: \"7a7b036f-4e32-47e9-b700-da7ef3615e4f\") " pod="openshift-ingress-canary/ingress-canary-lbd69"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.844903 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-tmpfs\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.844921 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngpkp\" (UniqueName: \"kubernetes.io/projected/ed73bac2-f781-4475-b265-8c8820d10e3b-kube-api-access-ngpkp\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.844958 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf74x\" (UniqueName: \"kubernetes.io/projected/5ec159e5-6cc8-4130-a83c-ad402c63e175-kube-api-access-lf74x\") pod \"package-server-manager-789f6589d5-d8kf5\" (UID: \"5ec159e5-6cc8-4130-a83c-ad402c63e175\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845015 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-config\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845170 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-serving-cert\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845195 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-certs\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845227 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df702c9e-2d17-476e-9bbe-d41784bf809b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845264 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-metrics-tls\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845285 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845304 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ad194c8-35db-4a68-9c59-575a8971d714-signing-key\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845332 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845354 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/241b3d1c-56ec-4088-bcfa-bea0aecea050-srv-cert\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845374 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zltvn\" (UniqueName: \"kubernetes.io/projected/15489ac0-9ae3-4068-973c-fd1ea98642c3-kube-api-access-zltvn\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845392 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-metrics-certs\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845411 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845432 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2063d8fc-0614-40e7-be84-ebfbda9acd89-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845451 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dxdm\" (UniqueName: \"kubernetes.io/projected/38abc107-38ba-4e77-b00f-eece6eb28537-kube-api-access-6dxdm\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845469 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed73bac2-f781-4475-b265-8c8820d10e3b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845504 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43f94f0-791b-49cc-afe0-95ec18aa1f07-secret-volume\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845521 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-config\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845550 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-proxy-tls\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845567 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-srv-cert\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845586 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-client\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845619 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5jlk\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-kube-api-access-s5jlk\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845639 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845657 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed73bac2-f781-4475-b265-8c8820d10e3b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845673 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ad194c8-35db-4a68-9c59-575a8971d714-signing-cabundle\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845700 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df702c9e-2d17-476e-9bbe-d41784bf809b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845736 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-tls\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845754 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llcrt\" (UniqueName: \"kubernetes.io/projected/241b3d1c-56ec-4088-bcfa-bea0aecea050-kube-api-access-llcrt\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845772 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845787 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2063d8fc-0614-40e7-be84-ebfbda9acd89-config\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845805 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4f0c12f1-c780-4020-921b-11e410503db3-proxy-tls\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845822 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg6wl\" (UniqueName: \"kubernetes.io/projected/dda3a423-1b53-4e85-9ef1-123fe54ceb98-kube-api-access-jg6wl\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-config-volume\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845857 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5rcg\" (UniqueName: \"kubernetes.io/projected/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-kube-api-access-s5rcg\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845878 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-registration-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845894 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7638c8d5-0616-4612-9d15-7594e4f74184-serving-cert\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845926 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tcrm\" (UniqueName: \"kubernetes.io/projected/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-kube-api-access-6tcrm\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845942 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9svw7\" (UniqueName: \"kubernetes.io/projected/4f0c12f1-c780-4020-921b-11e410503db3-kube-api-access-9svw7\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845959 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec159e5-6cc8-4130-a83c-ad402c63e175-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d8kf5\" (UID: \"5ec159e5-6cc8-4130-a83c-ad402c63e175\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.845983 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b7tr\" (UniqueName: \"kubernetes.io/projected/b43f94f0-791b-49cc-afe0-95ec18aa1f07-kube-api-access-2b7tr\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846000 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dda3a423-1b53-4e85-9ef1-123fe54ceb98-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846016 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846138 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2jwr\" (UniqueName: \"kubernetes.io/projected/4334e640-e3c2-4238-b7da-85e73bda80af-kube-api-access-v2jwr\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846275 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-socket-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846296 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/241b3d1c-56ec-4088-bcfa-bea0aecea050-profile-collector-cert\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846314 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kndnb\" (UniqueName: \"kubernetes.io/projected/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-kube-api-access-kndnb\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846356 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dda3a423-1b53-4e85-9ef1-123fe54ceb98-metrics-tls\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846375 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.846413 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df702c9e-2d17-476e-9bbe-d41784bf809b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847220 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-mountpoint-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847286 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq5s2\" (UniqueName: \"kubernetes.io/projected/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-kube-api-access-lq5s2\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847308 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpx2n\" (UniqueName: \"kubernetes.io/projected/7ad194c8-35db-4a68-9c59-575a8971d714-kube-api-access-xpx2n\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6"
Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.847370 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.347332944 +0000 UTC m=+142.308580147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847430 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43f94f0-791b-49cc-afe0-95ec18aa1f07-config-volume\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847504 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7638c8d5-0616-4612-9d15-7594e4f74184-config\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847604 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ebf43de-28a1-4cb6-a008-7bcc970b96ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rthrv\" (UID: \"6ebf43de-28a1-4cb6-a008-7bcc970b96ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847640 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h7j5\" (UniqueName: \"kubernetes.io/projected/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-kube-api-access-6h7j5\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847672 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4f0c12f1-c780-4020-921b-11e410503db3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847699 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a7b036f-4e32-47e9-b700-da7ef3615e4f-cert\") pod \"ingress-canary-lbd69\" (UID: \"7a7b036f-4e32-47e9-b700-da7ef3615e4f\") " pod="openshift-ingress-canary/ingress-canary-lbd69"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847773 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2063d8fc-0614-40e7-be84-ebfbda9acd89-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847800 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-config\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847825 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dda3a423-1b53-4e85-9ef1-123fe54ceb98-trusted-ca\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847858 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-trusted-ca\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847885 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-csi-data-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847911 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-apiservice-cert\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847957 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-default-certificate\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.847990 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8mn\" (UniqueName: \"kubernetes.io/projected/e7334e56-32c0-40f4-b60d-afab26024b6a-kube-api-access-fr8mn\") pod \"migrator-59844c95c7-s86jb\" (UID: \"e7334e56-32c0-40f4-b60d-afab26024b6a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848019 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848071 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-ca\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848094 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66910c2a-724c-42a8-8511-a8ee6de7d140-serving-cert\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848121 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8gt4\" (UniqueName: \"kubernetes.io/projected/7638c8d5-0616-4612-9d15-7594e4f74184-kube-api-access-q8gt4\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848146 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-images\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848198 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848240 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4334e640-e3c2-4238-b7da-85e73bda80af-service-ca-bundle\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848270 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7pb\" (UniqueName: \"kubernetes.io/projected/6ebf43de-28a1-4cb6-a008-7bcc970b96ac-kube-api-access-wq7pb\") pod \"control-plane-machine-set-operator-78cbb6b69f-rthrv\" (UID: \"6ebf43de-28a1-4cb6-a008-7bcc970b96ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848299 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9jjs\" (UniqueName: \"kubernetes.io/projected/f65257ab-42e6-4f77-ab65-f9f762c8ae42-kube-api-access-n9jjs\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848327 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38abc107-38ba-4e77-b00f-eece6eb28537-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848376 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-bound-sa-token\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848419 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-certificates\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848444 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-plugins-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848474 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlg7d\" (UniqueName: \"kubernetes.io/projected/0f7429df-aeda-4c76-9051-401488358e6c-kube-api-access-hlg7d\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848502 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848569 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjm7w\" (UniqueName: \"kubernetes.io/projected/66910c2a-724c-42a8-8511-a8ee6de7d140-kube-api-access-cjm7w\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848609 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-service-ca\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848665 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-webhook-cert\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848693 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-node-bootstrap-token\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848748 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-service-ca-bundle\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848774 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k866t\" (UniqueName: \"kubernetes.io/projected/531bdeb2-b55c-4a3b-8fb5-1dca8478c479-kube-api-access-k866t\") pod \"multus-admission-controller-857f4d67dd-969ns\" (UID: \"531bdeb2-b55c-4a3b-8fb5-1dca8478c479\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848799 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38abc107-38ba-4e77-b00f-eece6eb28537-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.848839 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/531bdeb2-b55c-4a3b-8fb5-1dca8478c479-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-969ns\" (UID: \"531bdeb2-b55c-4a3b-8fb5-1dca8478c479\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.851338 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-config\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.852581 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.352568388 +0000 UTC m=+142.313815421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.852631 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-trusted-ca\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.853148 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.853871 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.854185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-certificates\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.855069 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-service-ca-bundle\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.859638 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.860628 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.863922 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-tls\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.864342 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-serving-cert\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.865825 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.875511 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" event={"ID":"c38d45aa-0713-4059-8c2d-59a9b1cb5861","Type":"ContainerStarted","Data":"4ccef6209b460edd87c24008fcd6e78ad0415660bdd14124b8ede142bd3080ba"}
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.875614 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" event={"ID":"c38d45aa-0713-4059-8c2d-59a9b1cb5861","Type":"ContainerStarted","Data":"83ce17a74c9caf6841cd98c3af37b3d5536f88aa3af1fbe7e66ccf183a3a4128"}
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.877729 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" event={"ID":"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48","Type":"ContainerStarted","Data":"fbcefed559af56f817450efb27400d18fbce7bb268fc0588724c1105ad41d38b"}
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.882573 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" event={"ID":"9e86ea88-60d1-4af7-8095-5ee44e176029","Type":"ContainerStarted","Data":"10484bacde70e407dc6877733466f503013f34bb2a73eae67f568151233af8f6"}
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.884410 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hwb2t" event={"ID":"21b53e08-d25e-41ab-a180-4b852eb77c8c","Type":"ContainerStarted","Data":"1fb05cef810c91cb605dd4c3bc4b66f2e11e171e5bc7b3102d68194e8af8b49d"}
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.884448 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hwb2t" event={"ID":"21b53e08-d25e-41ab-a180-4b852eb77c8c","Type":"ContainerStarted","Data":"a397818806945dacae5885df09ade6fe6409b73708672ebb09cfbcc980387031"}
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.885349 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.888059 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" event={"ID":"b609710f-4a90-417e-9e31-b1a045c1e8a2","Type":"ContainerStarted","Data":"494f52ceee5f880a7a8f0ddabc3bd4351cacf36bbd408a5967adf19cdb046599"}
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.888089 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" event={"ID":"b609710f-4a90-417e-9e31-b1a045c1e8a2","Type":"ContainerStarted","Data":"88ee093657069d3707223ee8495ad4585134244f60a83a3e600efd920298b96d"}
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.890141 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" event={"ID":"7616472e-472c-4dfa-bf69-97d784e1e42f","Type":"ContainerStarted","Data":"9fcd808e8139932180f2fb427fe37499a4e962be7e99a65174ca99da5d94ff10"}
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.890171 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" event={"ID":"7616472e-472c-4dfa-bf69-97d784e1e42f","Type":"ContainerStarted","Data":"b404dbc4f29bee0dc6d6aac3af8a5b63eda098c582d3e0822de24307a9f21dc1"}
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.905543 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.916001 4979 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.916120 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config podName:ff61cd4b-2b9f-4588-be96-10038ccc4a92 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.416098247 +0000 UTC m=+142.377345280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config") pod "controller-manager-879f6c89f-4zkpx" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.920858 4979 secret.go:188] Couldn't get secret openshift-console-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.921459 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert podName:45cde1ce-04ec-4fdd-bfc0-10d072a9eff1 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.421428425 +0000 UTC m=+142.382675458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert") pod "console-operator-58897d9998-l44fm" (UID: "45cde1ce-04ec-4fdd-bfc0-10d072a9eff1") : failed to sync secret cache: timed out waiting for the condition
Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.921549 4979 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.921649 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-config podName:45cde1ce-04ec-4fdd-bfc0-10d072a9eff1 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.42162773 +0000 UTC m=+142.382874933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-config") pod "console-operator-58897d9998-l44fm" (UID: "45cde1ce-04ec-4fdd-bfc0-10d072a9eff1") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.926390 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.927092 4979 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.927179 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca podName:ff61cd4b-2b9f-4588-be96-10038ccc4a92 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.427157623 +0000 UTC m=+142.388404656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca") pod "controller-manager-879f6c89f-4zkpx" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.946447 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950083 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.950317 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.450274852 +0000 UTC m=+142.411521885 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950370 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df702c9e-2d17-476e-9bbe-d41784bf809b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950476 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llcrt\" (UniqueName: \"kubernetes.io/projected/241b3d1c-56ec-4088-bcfa-bea0aecea050-kube-api-access-llcrt\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950509 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg6wl\" (UniqueName: \"kubernetes.io/projected/dda3a423-1b53-4e85-9ef1-123fe54ceb98-kube-api-access-jg6wl\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950784 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-config-volume\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950808 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5rcg\" (UniqueName: \"kubernetes.io/projected/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-kube-api-access-s5rcg\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950913 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-registration-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951334 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-registration-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951368 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df702c9e-2d17-476e-9bbe-d41784bf809b-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.950943 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2063d8fc-0614-40e7-be84-ebfbda9acd89-config\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951477 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4f0c12f1-c780-4020-921b-11e410503db3-proxy-tls\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951513 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7638c8d5-0616-4612-9d15-7594e4f74184-serving-cert\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951554 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9svw7\" (UniqueName: \"kubernetes.io/projected/4f0c12f1-c780-4020-921b-11e410503db3-kube-api-access-9svw7\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951576 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2063d8fc-0614-40e7-be84-ebfbda9acd89-config\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951588 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec159e5-6cc8-4130-a83c-ad402c63e175-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d8kf5\" (UID: \"5ec159e5-6cc8-4130-a83c-ad402c63e175\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951628 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dda3a423-1b53-4e85-9ef1-123fe54ceb98-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951650 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: 
\"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951680 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b7tr\" (UniqueName: \"kubernetes.io/projected/b43f94f0-791b-49cc-afe0-95ec18aa1f07-kube-api-access-2b7tr\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951701 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2jwr\" (UniqueName: \"kubernetes.io/projected/4334e640-e3c2-4238-b7da-85e73bda80af-kube-api-access-v2jwr\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951732 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/241b3d1c-56ec-4088-bcfa-bea0aecea050-profile-collector-cert\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951752 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kndnb\" (UniqueName: \"kubernetes.io/projected/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-kube-api-access-kndnb\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951783 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-socket-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951801 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df702c9e-2d17-476e-9bbe-d41784bf809b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951818 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dda3a423-1b53-4e85-9ef1-123fe54ceb98-metrics-tls\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951836 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 
21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951851 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-mountpoint-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951872 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpx2n\" (UniqueName: \"kubernetes.io/projected/7ad194c8-35db-4a68-9c59-575a8971d714-kube-api-access-xpx2n\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951892 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq5s2\" (UniqueName: \"kubernetes.io/projected/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-kube-api-access-lq5s2\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951934 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43f94f0-791b-49cc-afe0-95ec18aa1f07-config-volume\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951963 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7638c8d5-0616-4612-9d15-7594e4f74184-config\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951988 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-mountpoint-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.951998 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ebf43de-28a1-4cb6-a008-7bcc970b96ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rthrv\" (UID: \"6ebf43de-28a1-4cb6-a008-7bcc970b96ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952113 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h7j5\" (UniqueName: \"kubernetes.io/projected/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-kube-api-access-6h7j5\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952128 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-socket-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952144 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4f0c12f1-c780-4020-921b-11e410503db3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952418 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a7b036f-4e32-47e9-b700-da7ef3615e4f-cert\") pod \"ingress-canary-lbd69\" (UID: \"7a7b036f-4e32-47e9-b700-da7ef3615e4f\") " pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952450 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dda3a423-1b53-4e85-9ef1-123fe54ceb98-trusted-ca\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952478 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2063d8fc-0614-40e7-be84-ebfbda9acd89-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952499 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-config\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952527 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-csi-data-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952549 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-apiservice-cert\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952587 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-default-certificate\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952613 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr8mn\" (UniqueName: \"kubernetes.io/projected/e7334e56-32c0-40f4-b60d-afab26024b6a-kube-api-access-fr8mn\") pod \"migrator-59844c95c7-s86jb\" (UID: \"e7334e56-32c0-40f4-b60d-afab26024b6a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952637 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-ca\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952676 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-images\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954277 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954379 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66910c2a-724c-42a8-8511-a8ee6de7d140-serving-cert\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954389 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-csi-data-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954411 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8gt4\" (UniqueName: \"kubernetes.io/projected/7638c8d5-0616-4612-9d15-7594e4f74184-kube-api-access-q8gt4\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954605 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4334e640-e3c2-4238-b7da-85e73bda80af-service-ca-bundle\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954674 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq7pb\" (UniqueName: \"kubernetes.io/projected/6ebf43de-28a1-4cb6-a008-7bcc970b96ac-kube-api-access-wq7pb\") pod \"control-plane-machine-set-operator-78cbb6b69f-rthrv\" (UID: \"6ebf43de-28a1-4cb6-a008-7bcc970b96ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954710 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9jjs\" (UniqueName: \"kubernetes.io/projected/f65257ab-42e6-4f77-ab65-f9f762c8ae42-kube-api-access-n9jjs\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954765 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38abc107-38ba-4e77-b00f-eece6eb28537-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954841 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-plugins-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.954871 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlg7d\" (UniqueName: \"kubernetes.io/projected/0f7429df-aeda-4c76-9051-401488358e6c-kube-api-access-hlg7d\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.955015 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4f0c12f1-c780-4020-921b-11e410503db3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.952712 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: E0130 21:42:25.957904 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 21:42:26.457878763 +0000 UTC m=+142.419125796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.957973 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0f7429df-aeda-4c76-9051-401488358e6c-plugins-dir\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.958101 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4334e640-e3c2-4238-b7da-85e73bda80af-service-ca-bundle\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.958712 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-config\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-ca\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959290 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjm7w\" (UniqueName: \"kubernetes.io/projected/66910c2a-724c-42a8-8511-a8ee6de7d140-kube-api-access-cjm7w\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959319 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-service-ca\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959355 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-webhook-cert\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959372 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-node-bootstrap-token\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959416 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k866t\" (UniqueName: \"kubernetes.io/projected/531bdeb2-b55c-4a3b-8fb5-1dca8478c479-kube-api-access-k866t\") pod \"multus-admission-controller-857f4d67dd-969ns\" (UID: \"531bdeb2-b55c-4a3b-8fb5-1dca8478c479\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959437 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38abc107-38ba-4e77-b00f-eece6eb28537-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/531bdeb2-b55c-4a3b-8fb5-1dca8478c479-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-969ns\" (UID: \"531bdeb2-b55c-4a3b-8fb5-1dca8478c479\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959501 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-stats-auth\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959559 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvvnk\" (UniqueName: \"kubernetes.io/projected/7a7b036f-4e32-47e9-b700-da7ef3615e4f-kube-api-access-xvvnk\") pod \"ingress-canary-lbd69\" (UID: \"7a7b036f-4e32-47e9-b700-da7ef3615e4f\") " pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959583 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-tmpfs\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959606 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngpkp\" (UniqueName: \"kubernetes.io/projected/ed73bac2-f781-4475-b265-8c8820d10e3b-kube-api-access-ngpkp\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf74x\" (UniqueName: \"kubernetes.io/projected/5ec159e5-6cc8-4130-a83c-ad402c63e175-kube-api-access-lf74x\") pod 
\"package-server-manager-789f6589d5-d8kf5\" (UID: \"5ec159e5-6cc8-4130-a83c-ad402c63e175\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959726 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-certs\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959765 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df702c9e-2d17-476e-9bbe-d41784bf809b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959790 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-metrics-tls\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959825 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ad194c8-35db-4a68-9c59-575a8971d714-signing-key\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.959850 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960142 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/241b3d1c-56ec-4088-bcfa-bea0aecea050-srv-cert\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960182 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zltvn\" (UniqueName: \"kubernetes.io/projected/15489ac0-9ae3-4068-973c-fd1ea98642c3-kube-api-access-zltvn\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960208 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-metrics-certs\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960247 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960448 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2063d8fc-0614-40e7-be84-ebfbda9acd89-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960475 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dxdm\" (UniqueName: \"kubernetes.io/projected/38abc107-38ba-4e77-b00f-eece6eb28537-kube-api-access-6dxdm\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960517 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed73bac2-f781-4475-b265-8c8820d10e3b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960548 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43f94f0-791b-49cc-afe0-95ec18aa1f07-secret-volume\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960611 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-config\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960655 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-proxy-tls\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960681 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-srv-cert\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960702 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-client\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960904 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ad194c8-35db-4a68-9c59-575a8971d714-signing-cabundle\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960929 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.960977 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed73bac2-f781-4475-b265-8c8820d10e3b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.961890 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-service-ca\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.962222 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dda3a423-1b53-4e85-9ef1-123fe54ceb98-trusted-ca\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.962448 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dda3a423-1b53-4e85-9ef1-123fe54ceb98-metrics-tls\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.962702 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-tmpfs\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.964061 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38abc107-38ba-4e77-b00f-eece6eb28537-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.964300 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-config\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.965475 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed73bac2-f781-4475-b265-8c8820d10e3b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.966488 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-metrics-certs\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.968112 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.968827 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df702c9e-2d17-476e-9bbe-d41784bf809b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.968997 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-default-certificate\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.970486 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.972900 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66910c2a-724c-42a8-8511-a8ee6de7d140-serving-cert\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.973125 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38abc107-38ba-4e77-b00f-eece6eb28537-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.973464 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4f0c12f1-c780-4020-921b-11e410503db3-proxy-tls\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.974772 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed73bac2-f781-4475-b265-8c8820d10e3b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.979284 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/66910c2a-724c-42a8-8511-a8ee6de7d140-etcd-client\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.976503 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2063d8fc-0614-40e7-be84-ebfbda9acd89-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.983174 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4334e640-e3c2-4238-b7da-85e73bda80af-stats-auth\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.985604 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 21:42:25 crc kubenswrapper[4979]: I0130 21:42:25.993699 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.007272 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.019399 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ebf43de-28a1-4cb6-a008-7bcc970b96ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rthrv\" (UID: \"6ebf43de-28a1-4cb6-a008-7bcc970b96ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.024281 4979 request.go:700] Waited for 1.008803642s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0 Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.026748 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.030849 4979 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.030986 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert podName:cc25d794-4ead-4436-a026-179f655c13d4 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.530946474 +0000 UTC m=+142.492193507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert") pod "console-f9d7485db-h6sv5" (UID: "cc25d794-4ead-4436-a026-179f655c13d4") : failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.034918 4979 secret.go:188] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.035014 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d768fc5d-52c2-4901-a7cd-759d26f88251-serving-cert podName:d768fc5d-52c2-4901-a7cd-759d26f88251 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.534992827 +0000 UTC m=+142.496239860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d768fc5d-52c2-4901-a7cd-759d26f88251-serving-cert") pod "openshift-config-operator-7777fb866f-dqtmx" (UID: "d768fc5d-52c2-4901-a7cd-759d26f88251") : failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.047753 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.062796 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.063335 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.56330069 +0000 UTC m=+142.524547723 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.064338 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.065000 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.564991946 +0000 UTC m=+142.526238979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.067508 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.072732 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec159e5-6cc8-4130-a83c-ad402c63e175-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d8kf5\" (UID: \"5ec159e5-6cc8-4130-a83c-ad402c63e175\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.085248 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.106676 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.126179 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.147251 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.165084 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.165251 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.665212289 +0000 UTC m=+142.626459322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.165811 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.166719 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.666698461 +0000 UTC m=+142.627945494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.168369 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.176760 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-webhook-cert\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.179781 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-apiservice-cert\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.202567 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j27sl\" (UniqueName: \"kubernetes.io/projected/828e6466-447a-47f9-9727-3992db7c27c9-kube-api-access-j27sl\") pod \"route-controller-manager-6576b87f9c-x8j5s\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.220580 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/814afa6a-716d-4011-89f9-6ccbc336e361-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.267012 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.267447 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2xm4\" (UniqueName: \"kubernetes.io/projected/814afa6a-716d-4011-89f9-6ccbc336e361-kube-api-access-q2xm4\") pod \"cluster-image-registry-operator-dc59b4c8b-pj644\" (UID: \"814afa6a-716d-4011-89f9-6ccbc336e361\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.267640 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.767616603 +0000 UTC m=+142.728863646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.273956 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.284502 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sztff\" (UniqueName: \"kubernetes.io/projected/de06742d-2533-4510-abec-ff0f35d84a45-kube-api-access-sztff\") pod \"oauth-openshift-558db77b4-8pq8k\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.293289 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.303384 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvqmg\" (UniqueName: \"kubernetes.io/projected/4d2da2c2-6056-4902-a20b-19333d24a600-kube-api-access-bvqmg\") pod \"dns-operator-744455d44c-ww6sg\" (UID: \"4d2da2c2-6056-4902-a20b-19333d24a600\") " pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.306211 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.306767 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.326682 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.346216 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.358407 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.370465 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.370939 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.870923711 +0000 UTC m=+142.832170744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.373963 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.377336 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.456690 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.456691 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.460607 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43f94f0-791b-49cc-afe0-95ec18aa1f07-config-volume\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.465727 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.471064 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.471402 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.471591 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.471721 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-config\") pod \"console-operator-58897d9998-l44fm\" (UID: 
\"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.471823 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.474543 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cx5c\" (UniqueName: \"kubernetes.io/projected/26cfd7ef-1024-479e-bdc5-e39429a16ee5-kube-api-access-4cx5c\") pod \"cluster-samples-operator-665b6dd947-hm7cc\" (UID: \"26cfd7ef-1024-479e-bdc5-e39429a16ee5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.476721 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqg47\" (UniqueName: \"kubernetes.io/projected/cc25d794-4ead-4436-a026-179f655c13d4-kube-api-access-bqg47\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.476871 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:26.976846581 +0000 UTC m=+142.938093614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.489400 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.497488 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/241b3d1c-56ec-4088-bcfa-bea0aecea050-profile-collector-cert\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.497495 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43f94f0-791b-49cc-afe0-95ec18aa1f07-secret-volume\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.500219 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: 
\"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.506783 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.519045 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ad194c8-35db-4a68-9c59-575a8971d714-signing-key\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.525922 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.545844 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.547990 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.557121 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pq8k"] Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.561625 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.566103 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.574876 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.575077 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.575148 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d768fc5d-52c2-4901-a7cd-759d26f88251-serving-cert\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.576316 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.076266462 +0000 UTC m=+143.037513495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.581002 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"] Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.585695 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 21:42:26 crc kubenswrapper[4979]: W0130 21:42:26.588016 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod828e6466_447a_47f9_9727_3992db7c27c9.slice/crio-deac1bbcbdad5b5fef8f1539d5a37b05719e08732d51faaf7fef7703be74e096 WatchSource:0}: Error finding container deac1bbcbdad5b5fef8f1539d5a37b05719e08732d51faaf7fef7703be74e096: Status 404 returned error can't find the container with id deac1bbcbdad5b5fef8f1539d5a37b05719e08732d51faaf7fef7703be74e096 Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.596194 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ad194c8-35db-4a68-9c59-575a8971d714-signing-cabundle\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.604610 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ww6sg"] Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.605869 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.627078 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.647157 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.658679 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7638c8d5-0616-4612-9d15-7594e4f74184-serving-cert\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.666776 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.669846 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7638c8d5-0616-4612-9d15-7594e4f74184-config\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:26 crc 
kubenswrapper[4979]: I0130 21:42:26.679578 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.679768 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.179737045 +0000 UTC m=+143.140984078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.680005 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.681121 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.181095613 +0000 UTC m=+143.142342646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.690863 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.707368 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.721464 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/531bdeb2-b55c-4a3b-8fb5-1dca8478c479-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-969ns\" (UID: \"531bdeb2-b55c-4a3b-8fb5-1dca8478c479\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.730602 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.747019 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.757162 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/241b3d1c-56ec-4088-bcfa-bea0aecea050-srv-cert\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.766720 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.769590 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-images\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.782299 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.782424 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.282396145 +0000 UTC m=+143.243643188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.785797 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.786521 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.286494019 +0000 UTC m=+143.247741052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.793570 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.798683 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644"] Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.800367 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-proxy-tls\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.808147 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.809235 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc"] Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.827217 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.844532 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-srv-cert\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:26 crc 
kubenswrapper[4979]: I0130 21:42:26.846319 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.865910 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.878848 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a7b036f-4e32-47e9-b700-da7ef3615e4f-cert\") pod \"ingress-canary-lbd69\" (UID: \"7a7b036f-4e32-47e9-b700-da7ef3615e4f\") " pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.886262 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.887499 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.887691 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.387662178 +0000 UTC m=+143.348909211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.888018 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.888581 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.388565703 +0000 UTC m=+143.349812736 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.907098 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.916412 4979 generic.go:334] "Generic (PLEG): container finished" podID="daf9c301-ff6e-47d9-a8a0-d88e6cf53d48" containerID="482279a721847b918c5fc4616a62f3a67d742bc6c5938bc4828e9ca15dcb97ba" exitCode=0 Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.917213 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" event={"ID":"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48","Type":"ContainerDied","Data":"482279a721847b918c5fc4616a62f3a67d742bc6c5938bc4828e9ca15dcb97ba"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.919864 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" event={"ID":"814afa6a-716d-4011-89f9-6ccbc336e361","Type":"ContainerStarted","Data":"403cfba5196c5bf67f4cf059ebba316fa3e60cbe9e05f15ef7389dc5b80b5070"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.920930 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" event={"ID":"4d2da2c2-6056-4902-a20b-19333d24a600","Type":"ContainerStarted","Data":"06fb6f96cf1b0beeb7c7f19e2a0d2bdb71e4fd36261a8d789489994922e28ca6"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.926370 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" event={"ID":"828e6466-447a-47f9-9727-3992db7c27c9","Type":"ContainerStarted","Data":"1a95ca4d3d52fa45ac0c03598e04f51654e2ae85b01f82e3a46a20846a9d630c"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.926424 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" event={"ID":"828e6466-447a-47f9-9727-3992db7c27c9","Type":"ContainerStarted","Data":"deac1bbcbdad5b5fef8f1539d5a37b05719e08732d51faaf7fef7703be74e096"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.926910 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.928892 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.928904 4979 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-x8j5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.928999 4979 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.930784 4979 generic.go:334] "Generic (PLEG): container finished" podID="9e86ea88-60d1-4af7-8095-5ee44e176029" containerID="a3d00ba6590a18fe81a8591db85e915deceb9dbe89e5165b070d6df1277064b1" exitCode=0 Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.930857 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" event={"ID":"9e86ea88-60d1-4af7-8095-5ee44e176029","Type":"ContainerDied","Data":"a3d00ba6590a18fe81a8591db85e915deceb9dbe89e5165b070d6df1277064b1"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.933742 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" event={"ID":"de06742d-2533-4510-abec-ff0f35d84a45","Type":"ContainerStarted","Data":"246d40c550fcc6c9fdc34ebbfdb6355e89a001f7901886dab00180fbdbb32fa5"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.938596 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" event={"ID":"c38d45aa-0713-4059-8c2d-59a9b1cb5861","Type":"ContainerStarted","Data":"0d246b15f86d8f2da268ea26abfe13d934e17724e430a831e1809a5e4c519a8d"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.943304 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" event={"ID":"7616472e-472c-4dfa-bf69-97d784e1e42f","Type":"ContainerStarted","Data":"4b1948e08ff729819e269849dbc9d1ac0e3c6abb56999ef9b7f0cf2ca265909c"} Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.943989 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hwb2t" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.947441 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.947495 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.947771 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.951897 4979 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.952000 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-config-volume podName:ebc2a677-6e7a-41ce-a3f4-063acddaa66b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.451977798 +0000 UTC m=+143.413224841 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-config-volume") pod "dns-default-464m7" (UID: "ebc2a677-6e7a-41ce-a3f4-063acddaa66b") : failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.961140 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-certs\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.963581 4979 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.963666 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-metrics-tls podName:ebc2a677-6e7a-41ce-a3f4-063acddaa66b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.463646761 +0000 UTC m=+143.424893794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-metrics-tls") pod "dns-default-464m7" (UID: "ebc2a677-6e7a-41ce-a3f4-063acddaa66b") : failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.965206 4979 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.965322 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-node-bootstrap-token podName:f65257ab-42e6-4f77-ab65-f9f762c8ae42 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.465287446 +0000 UTC m=+143.426534479 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-node-bootstrap-token") pod "machine-config-server-2zdrx" (UID: "f65257ab-42e6-4f77-ab65-f9f762c8ae42") : failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.965401 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.986798 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 21:42:26 crc kubenswrapper[4979]: I0130 21:42:26.989630 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:26 crc kubenswrapper[4979]: E0130 21:42:26.991803 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 21:42:27.491779629 +0000 UTC m=+143.453026662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.006974 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.027723 4979 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.044304 4979 request.go:700] Waited for 1.93723165s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.046789 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.066956 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.086732 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.092192 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.092747 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.592721402 +0000 UTC m=+143.553968435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.106167 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.121823 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d768fc5d-52c2-4901-a7cd-759d26f88251-serving-cert\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.126162 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.136431 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-config\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.166682 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.186239 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.194925 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.195320 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.695286859 +0000 UTC m=+143.656533892 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.195527 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.195947 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.695928177 +0000 UTC m=+143.657175210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.224095 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tcrm\" (UniqueName: \"kubernetes.io/projected/0f9f4663-eacb-4b8f-b468-a1ee9e078f99-kube-api-access-6tcrm\") pod \"authentication-operator-69f744f599-ffscn\" (UID: \"0f9f4663-eacb-4b8f-b468-a1ee9e078f99\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.249834 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-bound-sa-token\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.267074 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.268830 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5jlk\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-kube-api-access-s5jlk\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.277323 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 
21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.296835 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.297478 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.797448946 +0000 UTC m=+143.758695999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.311894 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llcrt\" (UniqueName: \"kubernetes.io/projected/241b3d1c-56ec-4088-bcfa-bea0aecea050-kube-api-access-llcrt\") pod \"catalog-operator-68c6474976-cxp2c\" (UID: \"241b3d1c-56ec-4088-bcfa-bea0aecea050\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.321986 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg6wl\" (UniqueName: \"kubernetes.io/projected/dda3a423-1b53-4e85-9ef1-123fe54ceb98-kube-api-access-jg6wl\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.346621 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5rcg\" (UniqueName: \"kubernetes.io/projected/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-kube-api-access-s5rcg\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.363930 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9svw7\" (UniqueName: \"kubernetes.io/projected/4f0c12f1-c780-4020-921b-11e410503db3-kube-api-access-9svw7\") pod \"machine-config-controller-84d6567774-nzzr2\" (UID: \"4f0c12f1-c780-4020-921b-11e410503db3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.382356 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dda3a423-1b53-4e85-9ef1-123fe54ceb98-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cckwg\" (UID: \"dda3a423-1b53-4e85-9ef1-123fe54ceb98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.399567 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.400200 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:27.900172639 +0000 UTC m=+143.861419672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.404988 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b7tr\" (UniqueName: \"kubernetes.io/projected/b43f94f0-791b-49cc-afe0-95ec18aa1f07-kube-api-access-2b7tr\") pod \"collect-profiles-29496810-qqbl6\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.411594 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.425804 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2jwr\" (UniqueName: \"kubernetes.io/projected/4334e640-e3c2-4238-b7da-85e73bda80af-kube-api-access-v2jwr\") pod \"router-default-5444994796-hgm9w\" (UID: \"4334e640-e3c2-4238-b7da-85e73bda80af\") " pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.442542 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.448698 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kndnb\" (UniqueName: \"kubernetes.io/projected/82c82db9-e29a-4e8f-a5d0-04baf5a8c54f-kube-api-access-kndnb\") pod \"machine-config-operator-74547568cd-bkvc5\" (UID: \"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.464942 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpx2n\" (UniqueName: \"kubernetes.io/projected/7ad194c8-35db-4a68-9c59-575a8971d714-kube-api-access-xpx2n\") pod \"service-ca-9c57cc56f-cjfp6\" (UID: \"7ad194c8-35db-4a68-9c59-575a8971d714\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.472136 4979 secret.go:188] Couldn't get secret openshift-console-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.472248 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert podName:45cde1ce-04ec-4fdd-bfc0-10d072a9eff1 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.472223732 +0000 UTC m=+144.433470765 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert") pod "console-operator-58897d9998-l44fm" (UID: "45cde1ce-04ec-4fdd-bfc0-10d072a9eff1") : failed to sync secret cache: timed out waiting for the condition Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.474472 4979 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.474557 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca podName:ff61cd4b-2b9f-4588-be96-10038ccc4a92 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.474537956 +0000 UTC m=+144.435784989 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca") pod "controller-manager-879f6c89f-4zkpx" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92") : failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.516789 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.516977 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.517362 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-node-bootstrap-token\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.517462 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-metrics-tls\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.517512 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.017483005 +0000 UTC m=+143.978730098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.517649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-config-volume\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.518614 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-config-volume\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.519082 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.523890 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f65257ab-42e6-4f77-ab65-f9f762c8ae42-node-bootstrap-token\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.526261 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebc2a677-6e7a-41ce-a3f4-063acddaa66b-metrics-tls\") pod \"dns-default-464m7\" (UID: \"ebc2a677-6e7a-41ce-a3f4-063acddaa66b\") " pod="openshift-dns/dns-default-464m7" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.551656 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h7j5\" (UniqueName: \"kubernetes.io/projected/6952a3c6-a471-489c-ba9a-9e4b5e9ac362-kube-api-access-6h7j5\") pod \"packageserver-d55dfcdfc-6285m\" (UID: \"6952a3c6-a471-489c-ba9a-9e4b5e9ac362\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.555558 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8gt4\" (UniqueName: \"kubernetes.io/projected/7638c8d5-0616-4612-9d15-7594e4f74184-kube-api-access-q8gt4\") pod \"service-ca-operator-777779d784-2vcpm\" (UID: \"7638c8d5-0616-4612-9d15-7594e4f74184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.557841 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq5s2\" (UniqueName: \"kubernetes.io/projected/f1ebd25b-fae4-4659-ab8c-e57b0e9d9564-kube-api-access-lq5s2\") pod \"olm-operator-6b444d44fb-j5jdh\" (UID: \"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.559751 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.562750 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlg7d\" (UniqueName: \"kubernetes.io/projected/0f7429df-aeda-4c76-9051-401488358e6c-kube-api-access-hlg7d\") pod \"csi-hostpathplugin-tbr4j\" (UID: \"0f7429df-aeda-4c76-9051-401488358e6c\") " pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.571864 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.590511 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.592374 4979 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.592454 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert podName:cc25d794-4ead-4436-a026-179f655c13d4 nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.592426728 +0000 UTC m=+144.553673761 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert") pod "console-f9d7485db-h6sv5" (UID: "cc25d794-4ead-4436-a026-179f655c13d4") : failed to sync configmap cache: timed out waiting for the condition Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.599587 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9jjs\" (UniqueName: \"kubernetes.io/projected/f65257ab-42e6-4f77-ab65-f9f762c8ae42-kube-api-access-n9jjs\") pod \"machine-config-server-2zdrx\" (UID: \"f65257ab-42e6-4f77-ab65-f9f762c8ae42\") " pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.602193 4979 csr.go:261] certificate signing request csr-rvdws is approved, waiting to be issued Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.604075 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq7pb\" (UniqueName: \"kubernetes.io/projected/6ebf43de-28a1-4cb6-a008-7bcc970b96ac-kube-api-access-wq7pb\") pod \"control-plane-machine-set-operator-78cbb6b69f-rthrv\" (UID: \"6ebf43de-28a1-4cb6-a008-7bcc970b96ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.608934 4979 csr.go:257] certificate signing request csr-rvdws is issued Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.617118 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.620315 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-464m7" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.620733 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr8mn\" (UniqueName: \"kubernetes.io/projected/e7334e56-32c0-40f4-b60d-afab26024b6a-kube-api-access-fr8mn\") pod \"migrator-59844c95c7-s86jb\" (UID: \"e7334e56-32c0-40f4-b60d-afab26024b6a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.620658 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.621173 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.121149043 +0000 UTC m=+144.082396076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.627756 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.632185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rg56\" (UniqueName: \"kubernetes.io/projected/d768fc5d-52c2-4901-a7cd-759d26f88251-kube-api-access-5rg56\") pod \"openshift-config-operator-7777fb866f-dqtmx\" (UID: \"d768fc5d-52c2-4901-a7cd-759d26f88251\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.636336 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9xcb5\" (UID: \"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.654574 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.667489 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.723012 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.723200 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.223175715 +0000 UTC m=+144.184422758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.724236 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.724365 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.724836 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.224822471 +0000 UTC m=+144.186069504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.754517 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.767428 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.776739 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvvnk\" (UniqueName: \"kubernetes.io/projected/7a7b036f-4e32-47e9-b700-da7ef3615e4f-kube-api-access-xvvnk\") pod \"ingress-canary-lbd69\" (UID: \"7a7b036f-4e32-47e9-b700-da7ef3615e4f\") " pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.785826 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2063d8fc-0614-40e7-be84-ebfbda9acd89-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tpkqd\" (UID: \"2063d8fc-0614-40e7-be84-ebfbda9acd89\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.797908 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zltvn\" (UniqueName: \"kubernetes.io/projected/15489ac0-9ae3-4068-973c-fd1ea98642c3-kube-api-access-zltvn\") pod \"marketplace-operator-79b997595-4lzp5\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.802779 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k866t\" (UniqueName: \"kubernetes.io/projected/531bdeb2-b55c-4a3b-8fb5-1dca8478c479-kube-api-access-k866t\") pod \"multus-admission-controller-857f4d67dd-969ns\" (UID: \"531bdeb2-b55c-4a3b-8fb5-1dca8478c479\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.803307 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjm7w\" (UniqueName: \"kubernetes.io/projected/66910c2a-724c-42a8-8511-a8ee6de7d140-kube-api-access-cjm7w\") pod \"etcd-operator-b45778765-trsfj\" (UID: \"66910c2a-724c-42a8-8511-a8ee6de7d140\") " pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.803925 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df702c9e-2d17-476e-9bbe-d41784bf809b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66dgs\" (UID: \"df702c9e-2d17-476e-9bbe-d41784bf809b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.804621 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngpkp\" (UniqueName: \"kubernetes.io/projected/ed73bac2-f781-4475-b265-8c8820d10e3b-kube-api-access-ngpkp\") pod \"openshift-controller-manager-operator-756b6f6bc6-trhfm\" (UID: \"ed73bac2-f781-4475-b265-8c8820d10e3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.807768 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf74x\" (UniqueName: \"kubernetes.io/projected/5ec159e5-6cc8-4130-a83c-ad402c63e175-kube-api-access-lf74x\") pod \"package-server-manager-789f6589d5-d8kf5\" (UID: \"5ec159e5-6cc8-4130-a83c-ad402c63e175\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.809486 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.824289 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dxdm\" (UniqueName: \"kubernetes.io/projected/38abc107-38ba-4e77-b00f-eece6eb28537-kube-api-access-6dxdm\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtsdv\" (UID: \"38abc107-38ba-4e77-b00f-eece6eb28537\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.825926 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2zdrx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.826024 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.826345 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.826613 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.326582666 +0000 UTC m=+144.287829719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.826731 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.827186 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.327177143 +0000 UTC m=+144.288424186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.838184 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.849772 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.849885 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lbd69" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.852110 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.866277 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.870310 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.886545 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.901505 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.901665 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.903979 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.928382 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:27 crc kubenswrapper[4979]: E0130 21:42:27.929516 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.429492714 +0000 UTC m=+144.390739757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.937418 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.946597 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:27 crc kubenswrapper[4979]: I0130 21:42:27.960568 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" podStartSLOduration=122.960538963 podStartE2EDuration="2m2.960538963s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:27.945091845 +0000 UTC m=+143.906338878" watchObservedRunningTime="2026-01-30 21:42:27.960538963 +0000 UTC m=+143.921785996" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.041556 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.041785 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.042086 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.542067158 +0000 UTC m=+144.503314191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.076073 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.082630 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ffscn"] Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.100807 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" event={"ID":"9e86ea88-60d1-4af7-8095-5ee44e176029","Type":"ContainerStarted","Data":"5a4782cd5642ee53b3e77a36068ed257bcf3fcb651cda8c1cd1324fc8f074ca4"} Jan 30 21:42:28 crc kubenswrapper[4979]: W0130 21:42:28.133704 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f9f4663_eacb_4b8f_b468_a1ee9e078f99.slice/crio-c61ee2aa7afb4a607903ee7bd9b0998447f3a3c9928407c584bb9b4810e3a29a WatchSource:0}: Error finding container c61ee2aa7afb4a607903ee7bd9b0998447f3a3c9928407c584bb9b4810e3a29a: Status 404 returned error can't find the container with id c61ee2aa7afb4a607903ee7bd9b0998447f3a3c9928407c584bb9b4810e3a29a Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.136206 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"] Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.143075 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.643022932 +0000 UTC m=+144.604269965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.142836 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.143710 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.145286 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.645276774 +0000 UTC m=+144.606523807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.200642 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" event={"ID":"4d2da2c2-6056-4902-a20b-19333d24a600","Type":"ContainerStarted","Data":"d0dc7790a29609475ad50a49c09ae26499280e6187466ce0945f8ee13190eded"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.200714 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" event={"ID":"4d2da2c2-6056-4902-a20b-19333d24a600","Type":"ContainerStarted","Data":"6017def96b6caa60065d5429a60ce72361ce49cd895b8bb21850f07af87fcac5"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.204942 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2zdrx" event={"ID":"f65257ab-42e6-4f77-ab65-f9f762c8ae42","Type":"ContainerStarted","Data":"72601cf56f002824175ecf09efe07e763f78bbb4a3525556dc1397d3e6a7cef6"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.208455 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" event={"ID":"814afa6a-716d-4011-89f9-6ccbc336e361","Type":"ContainerStarted","Data":"20579b9842ae5964bc446dd49eeed58edd6c727d5884cdd2adf2f376b145b284"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.217019 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" event={"ID":"de06742d-2533-4510-abec-ff0f35d84a45","Type":"ContainerStarted","Data":"81e7ddaae02978ad5a7b5198e13bc3adaa3cfa27db9552a23b00db36df2ba57d"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.218111 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.219750 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hgm9w" event={"ID":"4334e640-e3c2-4238-b7da-85e73bda80af","Type":"ContainerStarted","Data":"2ea139e003b79a475feee14a42275d9f7453eb9a9213db279e7f3471a5ff7868"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.219785 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hgm9w" event={"ID":"4334e640-e3c2-4238-b7da-85e73bda80af","Type":"ContainerStarted","Data":"6e063486fd7b148044d8682d6785ff2daefca61dffb3e8d16fef1c823e967646"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.223016 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" event={"ID":"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48","Type":"ContainerStarted","Data":"ff4310f7f3e5e9a2bd7e8ab4af2af7190f06dac0bc572790cf55ed3c145c3133"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.223072 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" 
event={"ID":"daf9c301-ff6e-47d9-a8a0-d88e6cf53d48","Type":"ContainerStarted","Data":"e523f03cf34fb250e7c923c4e51c4d22ee5c1f909f78da9e21089ceadcbd7bfc"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.225188 4979 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8pq8k container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.225247 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" podUID="de06742d-2533-4510-abec-ff0f35d84a45" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.245772 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.245936 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.745899128 +0000 UTC m=+144.707146171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.246209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.246659 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.746648999 +0000 UTC m=+144.707896022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.281749 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" event={"ID":"26cfd7ef-1024-479e-bdc5-e39429a16ee5","Type":"ContainerStarted","Data":"d77dcc7b8d4eda94c65546c40f447fa33c1a26c7778842c278f2dd62a625995a"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.287463 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.287538 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.282147 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" event={"ID":"26cfd7ef-1024-479e-bdc5-e39429a16ee5","Type":"ContainerStarted","Data":"e1843cd1c21062a77b4e66267aeb5c358c2f352021d86e7e0e02dc5a10c4056b"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.293266 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" event={"ID":"26cfd7ef-1024-479e-bdc5-e39429a16ee5","Type":"ContainerStarted","Data":"420f21b838d0534acddc58f9a6bf76f7eb31055abbdf16d4c574fa64c4292182"} Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.347223 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.349200 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.849181465 +0000 UTC m=+144.810428498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.356454 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7s7j" podStartSLOduration=124.356431447 podStartE2EDuration="2m4.356431447s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:28.340711641 +0000 UTC m=+144.301958674" watchObservedRunningTime="2026-01-30 21:42:28.356431447 +0000 UTC m=+144.317678480" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.449965 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.450542 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:28.95052541 +0000 UTC m=+144.911772443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.478214 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c"] Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.551932 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.552130 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.05209188 +0000 UTC m=+145.013338913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.552380 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.552437 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.552551 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.552970 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.052948054 +0000 UTC m=+145.014195087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.553701 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") pod \"controller-manager-879f6c89f-4zkpx\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.561269 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.565250 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45cde1ce-04ec-4fdd-bfc0-10d072a9eff1-serving-cert\") pod \"console-operator-58897d9998-l44fm\" (UID: \"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1\") " pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.623308 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.623909 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.624613 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 21:37:27 +0000 UTC, rotation deadline is 2026-10-22 18:22:12.765122248 +0000 UTC Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.624671 4979 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6356h39m44.14045278s for next certificate rotation Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.633201 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.656173 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.656506 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.657450 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") pod \"console-f9d7485db-h6sv5\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.657513 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.157476186 +0000 UTC m=+145.118723229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.764502 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.773426 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.773948 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.273927088 +0000 UTC m=+145.235174121 (durationBeforeRetry 500ms). 
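Each failure is immediately followed by a nestedpendingoperations.go line of the form "No retries permitted until <t> (durationBeforeRetry 500ms)": the volume manager records the failure under the operation's key and refuses to start the same operation again until the window passes. That is why the reconciler can keep announcing "operationExecutor.MountVolume started" on every pass while the actual work stays throttled to roughly two attempts per second. A simplified per-key gate (the real code uses exponential backoff; this log shows the 500ms base window on each attempt):

// Sketch of the "No retries permitted until ..." gate, assuming a
// simplified per-operation backoff table.
package main

import (
	"fmt"
	"time"
)

type opGate struct{ notBefore map[string]time.Time }

func (g *opGate) tryStart(key string, backoff time.Duration, op func() error) error {
	now := time.Now()
	if t, ok := g.notBefore[key]; ok && now.Before(t) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)", t.Format(time.RFC3339), backoff)
	}
	if err := op(); err != nil {
		g.notBefore[key] = now.Add(backoff) // failed: close the gate for the backoff window
		return err
	}
	delete(g.notBefore, key) // success: clear the backoff
	return nil
}

func main() {
	g := &opGate{notBefore: map[string]time.Time{}}
	key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
	fail := func() error { return fmt.Errorf("driver not registered") }
	fmt.Println(g.tryStart(key, 500*time.Millisecond, fail)) // runs, fails, records the error
	fmt.Println(g.tryStart(key, 500*time.Millisecond, fail)) // rejected: window still open
}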
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.775761 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.875462 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.876139 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.376114915 +0000 UTC m=+145.337361948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.883365 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:28 crc kubenswrapper[4979]: I0130 21:42:28.977000 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:28 crc kubenswrapper[4979]: E0130 21:42:28.977497 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.47748029 +0000 UTC m=+145.438727323 (durationBeforeRetry 500ms). 
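The certificate_manager lines above are unrelated to the volume churn: the kubelet serving certificate runs to 2027-01-30, the manager has picked a rotation deadline of 2026-10-22 (typically a jittered point around 70-90% of the certificate's lifetime), and the logged wait is simply deadline minus now. The arithmetic checks out (timestamps copied from the log):

// Quick check of the certificate_manager arithmetic: the logged wait is
// (rotation deadline - now).
package main

import (
	"fmt"
	"time"
)

func main() {
	now, _ := time.Parse(time.RFC3339, "2026-01-30T21:42:28Z")
	deadline, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2026-10-22 18:22:12.765122248 +0000 UTC")
	// Prints ~6356h39m44.77s; the log says 6356h39m44.14s because its "now"
	// carries sub-second precision (21:42:28.624613).
	fmt.Println(deadline.Sub(now))
}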
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.016251 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-mr5l2" podStartSLOduration=124.016223162 podStartE2EDuration="2m4.016223162s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.001240367 +0000 UTC m=+144.962487420" watchObservedRunningTime="2026-01-30 21:42:29.016223162 +0000 UTC m=+144.977470215" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.078120 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.078472 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.578447553 +0000 UTC m=+145.539694596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.078559 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.079156 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.579144143 +0000 UTC m=+145.540391186 (durationBeforeRetry 500ms). 
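The "Observed pod startup duration" entries are straightforward: podStartSLOduration is observedRunningTime minus podCreationTimestamp in seconds, and podStartE2EDuration is the same interval rendered as a Go duration (the two can differ when image pull time is subtracted from the SLO figure; here the zero-valued firstStartedPulling/lastFinishedPulling mean no pull was observed, consistent with images already cached on the node). For the machine-api-operator entry above:

// The startup-latency line encodes a simple difference; values copied from
// the machine-api-operator entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-30T21:40:25Z")
	running, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2026-01-30 21:42:29.001240367 +0000 UTC")
	fmt.Println(running.Sub(created).Round(time.Second)) // 2m4s, logged as podStartSLOduration≈124.016s
}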
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.179911 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.180418 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.680383974 +0000 UTC m=+145.641630997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.186860 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.187417 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.687396878 +0000 UTC m=+145.648643911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.210772 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nm27z" podStartSLOduration=125.210749144 podStartE2EDuration="2m5.210749144s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.154484108 +0000 UTC m=+145.115731151" watchObservedRunningTime="2026-01-30 21:42:29.210749144 +0000 UTC m=+145.171996177" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.295965 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.296464 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.796424864 +0000 UTC m=+145.757671897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.296598 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.297048 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.797025381 +0000 UTC m=+145.758272414 (durationBeforeRetry 500ms). 
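Every kubelet timestamp in these entries carries an "m=+145.xxx" suffix. That is Go's monotonic clock reading: time.Now() records seconds elapsed since process start alongside wall time, and the default formatting prints both, so the retry deadlines above can also be read as "about 145-146 seconds into this kubelet process's life". A small demonstration:

// The "m=+..." suffix is Go's monotonic-clock reading, printed by
// time.Time's default String() whenever the reading is present.
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	time.Sleep(1500 * time.Millisecond)
	t := time.Now()
	fmt.Println(t)            // e.g. "... +0000 UTC m=+1.500123456" (this process is ~1.5s old)
	fmt.Println(t.Sub(start)) // differences between such times use the monotonic clock
}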
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.313389 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" event={"ID":"241b3d1c-56ec-4088-bcfa-bea0aecea050","Type":"ContainerStarted","Data":"f43255147e69e26f8a1fe665fac7bfec86a12ed20ef88126278fe472cc7b9de6"} Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.313481 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" event={"ID":"241b3d1c-56ec-4088-bcfa-bea0aecea050","Type":"ContainerStarted","Data":"0d26f89e2a172e315969f36b15e192c68b244c9952df301c48d811d675ba11ba"} Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.314432 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hwb2t" podStartSLOduration=124.314407232 podStartE2EDuration="2m4.314407232s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.311743268 +0000 UTC m=+145.272990321" watchObservedRunningTime="2026-01-30 21:42:29.314407232 +0000 UTC m=+145.275654265" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.316285 4979 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cxp2c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.316381 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" podUID="241b3d1c-56ec-4088-bcfa-bea0aecea050" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.316759 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.325310 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" event={"ID":"b43f94f0-791b-49cc-afe0-95ec18aa1f07","Type":"ContainerStarted","Data":"72cb010adee8d42eeef544e6077e19cc4bd21ebcf2f83845c5c858b217b33727"} Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.325388 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" event={"ID":"b43f94f0-791b-49cc-afe0-95ec18aa1f07","Type":"ContainerStarted","Data":"f9092fc40924a5c4c5ccda219effa1674a3cd66531deeb6ed63c03f809984b37"} Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.370634 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-2zdrx" event={"ID":"f65257ab-42e6-4f77-ab65-f9f762c8ae42","Type":"ContainerStarted","Data":"5f161c0de437e434f21dee588b1079d8ddca7bea83a3a66cc1adaeb5a3bc615c"} Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.405558 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.406407 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:29.906382267 +0000 UTC m=+145.867629300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.408211 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" event={"ID":"0f9f4663-eacb-4b8f-b468-a1ee9e078f99","Type":"ContainerStarted","Data":"599addbaf40f39e79f4307282b574e1f3829e30d728a7056008b55d63b9a9a52"} Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.408297 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" event={"ID":"0f9f4663-eacb-4b8f-b468-a1ee9e078f99","Type":"ContainerStarted","Data":"c61ee2aa7afb4a607903ee7bd9b0998447f3a3c9928407c584bb9b4810e3a29a"} Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.508475 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.516934 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.016907594 +0000 UTC m=+145.978154627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.603022 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.611501 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.612216 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.112187231 +0000 UTC m=+146.073434264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.612879 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:42:29 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld Jan 30 21:42:29 crc kubenswrapper[4979]: [+]process-running ok Jan 30 21:42:29 crc kubenswrapper[4979]: healthz check failed Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.612992 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.626403 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2zdrx" podStartSLOduration=5.626371344 podStartE2EDuration="5.626371344s" podCreationTimestamp="2026-01-30 21:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.621849788 +0000 UTC m=+145.583096821" watchObservedRunningTime="2026-01-30 21:42:29.626371344 +0000 UTC m=+145.587618377" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.680518 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" podStartSLOduration=125.6804695 
podStartE2EDuration="2m5.6804695s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.672798628 +0000 UTC m=+145.634045661" watchObservedRunningTime="2026-01-30 21:42:29.6804695 +0000 UTC m=+145.641716523" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.713532 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.713915 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.213895175 +0000 UTC m=+146.175142198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.723835 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hgm9w" podStartSLOduration=124.723806439 podStartE2EDuration="2m4.723806439s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.720176449 +0000 UTC m=+145.681423482" watchObservedRunningTime="2026-01-30 21:42:29.723806439 +0000 UTC m=+145.685053482" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.753734 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffscn" podStartSLOduration=125.753708067 podStartE2EDuration="2m5.753708067s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.751571388 +0000 UTC m=+145.712818421" watchObservedRunningTime="2026-01-30 21:42:29.753708067 +0000 UTC m=+145.714955110" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.782725 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pj644" podStartSLOduration=124.782701119 podStartE2EDuration="2m4.782701119s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.781409233 +0000 UTC m=+145.742656276" watchObservedRunningTime="2026-01-30 21:42:29.782701119 +0000 UTC m=+145.743948152" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.814750 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.815357 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.315321912 +0000 UTC m=+146.276568945 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.831002 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" podStartSLOduration=124.830981895 podStartE2EDuration="2m4.830981895s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.8297076 +0000 UTC m=+145.790954633" watchObservedRunningTime="2026-01-30 21:42:29.830981895 +0000 UTC m=+145.792228918" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.866545 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" podStartSLOduration=125.866528148 podStartE2EDuration="2m5.866528148s" podCreationTimestamp="2026-01-30 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.863994948 +0000 UTC m=+145.825241981" watchObservedRunningTime="2026-01-30 21:42:29.866528148 +0000 UTC m=+145.827775181" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.907324 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hm7cc" podStartSLOduration=124.907300166 podStartE2EDuration="2m4.907300166s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.906153304 +0000 UTC m=+145.867400347" watchObservedRunningTime="2026-01-30 21:42:29.907300166 +0000 UTC m=+145.868547189" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.918000 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:29 crc kubenswrapper[4979]: E0130 21:42:29.918481 4979 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.418468865 +0000 UTC m=+146.379715898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.940192 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ww6sg" podStartSLOduration=124.940165816 podStartE2EDuration="2m4.940165816s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.939641531 +0000 UTC m=+145.900888554" watchObservedRunningTime="2026-01-30 21:42:29.940165816 +0000 UTC m=+145.901412849" Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.957987 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5"] Jan 30 21:42:29 crc kubenswrapper[4979]: I0130 21:42:29.964297 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.002934 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" podStartSLOduration=125.002901992 podStartE2EDuration="2m5.002901992s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:29.990308143 +0000 UTC m=+145.951555176" watchObservedRunningTime="2026-01-30 21:42:30.002901992 +0000 UTC m=+145.964149025" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.019373 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.019578 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.519545331 +0000 UTC m=+146.480792364 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.019704 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.020827 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.520805056 +0000 UTC m=+146.482052089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.034024 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" podStartSLOduration=125.033993312 podStartE2EDuration="2m5.033993312s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:30.025422345 +0000 UTC m=+145.986669378" watchObservedRunningTime="2026-01-30 21:42:30.033993312 +0000 UTC m=+145.995240345" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.121813 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.122092 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.622067729 +0000 UTC m=+146.583314762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.122431 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.123507 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.623470678 +0000 UTC m=+146.584717711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.132179 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-464m7"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.202121 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.203683 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.219727 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.222661 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.224136 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.224303 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.724277757 +0000 UTC m=+146.685524790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.224388 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.224735 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.724728519 +0000 UTC m=+146.685975552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.242118 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjfp6"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.249053 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tbr4j"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.254342 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.314291 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.320492 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb"] Jan 30 21:42:30 crc kubenswrapper[4979]: W0130 21:42:30.323068 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6952a3c6_a471_489c_ba9a_9e4b5e9ac362.slice/crio-359d3ae9d028909f4c19dd931fc01c85f187961f5954dde8fda33045e7e3f4ba WatchSource:0}: Error finding container 359d3ae9d028909f4c19dd931fc01c85f187961f5954dde8fda33045e7e3f4ba: Status 404 returned error can't find the container with id 359d3ae9d028909f4c19dd931fc01c85f187961f5954dde8fda33045e7e3f4ba Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.325663 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.326154 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.826134834 +0000 UTC m=+146.787381867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.356505 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.356978 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.369735 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.369784 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.370646 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2"] Jan 30 21:42:30 crc kubenswrapper[4979]: W0130 21:42:30.378493 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7334e56_32c0_40f4_b60d_afab26024b6a.slice/crio-b1911dcd8dc7dd9b1dbc98801de3cb502058f2b2bb56e873a321a1a97e34ede9 WatchSource:0}: Error finding container b1911dcd8dc7dd9b1dbc98801de3cb502058f2b2bb56e873a321a1a97e34ede9: Status 404 returned error can't find the container with id b1911dcd8dc7dd9b1dbc98801de3cb502058f2b2bb56e873a321a1a97e34ede9 Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.382778 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4zkpx"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.392398 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-h6sv5"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.402635 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.419296 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lzp5"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.421240 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-969ns"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.427543 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.428016 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:30.927995623 +0000 UTC m=+146.889242656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: W0130 21:42:30.429664 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f0c12f1_c780_4020_921b_11e410503db3.slice/crio-0fbda49922ac71a267fd10280d675bf0512ad25e0f9eacfbce54b1f9080d913a WatchSource:0}: Error finding container 0fbda49922ac71a267fd10280d675bf0512ad25e0f9eacfbce54b1f9080d913a: Status 404 returned error can't find the container with id 0fbda49922ac71a267fd10280d675bf0512ad25e0f9eacfbce54b1f9080d913a Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.438558 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.438966 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.469770 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" event={"ID":"7ad194c8-35db-4a68-9c59-575a8971d714","Type":"ContainerStarted","Data":"5ef9ea1a8b1714ad37c66c921f63b0c32a57051cb23f65b630ba25107f1ba693"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.491823 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.491906 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" event={"ID":"7638c8d5-0616-4612-9d15-7594e4f74184","Type":"ContainerStarted","Data":"fb302f7dcc4d9c0fce298ad934f5dba2ebf56dbb724e75125c2c1b0501f98e6b"} Jan 30 21:42:30 crc kubenswrapper[4979]: W0130 21:42:30.504890 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod531bdeb2_b55c_4a3b_8fb5_1dca8478c479.slice/crio-95c0458ae28eb31ec71bdb02e60210790cc69fc574dd842485d70a015c01d44f WatchSource:0}: Error finding container 95c0458ae28eb31ec71bdb02e60210790cc69fc574dd842485d70a015c01d44f: Status 404 returned error can't find the container with id 95c0458ae28eb31ec71bdb02e60210790cc69fc574dd842485d70a015c01d44f Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.512692 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" event={"ID":"2063d8fc-0614-40e7-be84-ebfbda9acd89","Type":"ContainerStarted","Data":"39a7011e8f66e3083fbba9c04e7ba4c433ae4f32a904ca396f8fb210e2373cda"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.528533 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" event={"ID":"0f7429df-aeda-4c76-9051-401488358e6c","Type":"ContainerStarted","Data":"f3945d4121246c159397b7d4ada9093e0f33963deed010e1411b121d07437a1c"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.529318 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.529740 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.029722098 +0000 UTC m=+146.990969131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.535806 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" event={"ID":"6ebf43de-28a1-4cb6-a008-7bcc970b96ac","Type":"ContainerStarted","Data":"bdd825199390501468faf02b4ae1c5e76e7a754a355a385686ae77097aa84e4f"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.540094 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" event={"ID":"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564","Type":"ContainerStarted","Data":"f7169b345cdc05e29122d7abb87bd972d6e49b98973dcda6ae1ec411ac695143"} Jan 30 21:42:30 crc kubenswrapper[4979]: W0130 21:42:30.551419 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf702c9e_2d17_476e_9bbe_d41784bf809b.slice/crio-285227df2f2809d7308d35741df5ee3baf68a1030cfbabe7e20409189841dab7 WatchSource:0}: Error finding container 285227df2f2809d7308d35741df5ee3baf68a1030cfbabe7e20409189841dab7: Status 404 returned error can't find the container with id 285227df2f2809d7308d35741df5ee3baf68a1030cfbabe7e20409189841dab7 Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.553134 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lbd69"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.569446 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" 
event={"ID":"38abc107-38ba-4e77-b00f-eece6eb28537","Type":"ContainerStarted","Data":"e6ea3e6c09d3bc197d04de59fd80b4e6ca76b1c9554cecc917347193af1dfdbf"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.574634 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:42:30 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld Jan 30 21:42:30 crc kubenswrapper[4979]: [+]process-running ok Jan 30 21:42:30 crc kubenswrapper[4979]: healthz check failed Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.574726 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.576550 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" event={"ID":"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f","Type":"ContainerStarted","Data":"cd1146b8dfdad63d84be6913eeb3b6510467eb0c1ad861abd25f98ff51cc56fb"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.576605 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" event={"ID":"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f","Type":"ContainerStarted","Data":"03021ea95fef2aa2eca6cd5af517b1bd721a3b5fc1066c71f3b5ddcabfe8773a"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.579082 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" event={"ID":"e7334e56-32c0-40f4-b60d-afab26024b6a","Type":"ContainerStarted","Data":"b1911dcd8dc7dd9b1dbc98801de3cb502058f2b2bb56e873a321a1a97e34ede9"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.581014 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-464m7" event={"ID":"ebc2a677-6e7a-41ce-a3f4-063acddaa66b","Type":"ContainerStarted","Data":"072830c82b46453c7855f50c6e6f087a9cac16cd2584213d91cbbdf0bf5325a7"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.587341 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.588092 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.598105 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" event={"ID":"dda3a423-1b53-4e85-9ef1-123fe54ceb98","Type":"ContainerStarted","Data":"1b255e12b1ce330be94703262e51b96d275cbbe7182502b8b511460d87f0dbac"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.598182 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" event={"ID":"dda3a423-1b53-4e85-9ef1-123fe54ceb98","Type":"ContainerStarted","Data":"bf8643b5b61a4e7762bd6e45baf2c2788e0ee194d555a6f7af296867b9d36f21"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.601137 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" event={"ID":"6952a3c6-a471-489c-ba9a-9e4b5e9ac362","Type":"ContainerStarted","Data":"359d3ae9d028909f4c19dd931fc01c85f187961f5954dde8fda33045e7e3f4ba"} Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.607784 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-trsfj"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.622868 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zc7hq" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.637485 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.637958 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.137938372 +0000 UTC m=+147.099185405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.648790 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l44fm"] Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.689908 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cxp2c" Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.739221 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.741263 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.24123072 +0000 UTC m=+147.202477753 (durationBeforeRetry 500ms). 
Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.842061 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.842962 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.342948845 +0000 UTC m=+147.304195878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:30 crc kubenswrapper[4979]: W0130 21:42:30.886769 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45cde1ce_04ec_4fdd_bfc0_10d072a9eff1.slice/crio-d37c5f724c00aa8dc23bb97b8f7b6c603468493b5ca7655a0f28578e807dbc4f WatchSource:0}: Error finding container d37c5f724c00aa8dc23bb97b8f7b6c603468493b5ca7655a0f28578e807dbc4f: Status 404 returned error can't find the container with id d37c5f724c00aa8dc23bb97b8f7b6c603468493b5ca7655a0f28578e807dbc4f
Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.943211 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.943530 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.443488885 +0000 UTC m=+147.404735918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:30 crc kubenswrapper[4979]: I0130 21:42:30.943720 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:30 crc kubenswrapper[4979]: E0130 21:42:30.944112 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.444095583 +0000 UTC m=+147.405342616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.046692 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.047314 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.547289668 +0000 UTC m=+147.508536701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.095334 4979 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tdvvn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]log ok
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]etcd ok
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/max-in-flight-filter ok
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 30 21:42:31 crc kubenswrapper[4979]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 30 21:42:31 crc kubenswrapper[4979]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/openshift.io-startinformers ok
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 30 21:42:31 crc kubenswrapper[4979]: livez check failed
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.095440 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" podUID="daf9c301-ff6e-47d9-a8a0-d88e6cf53d48" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.149528 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.150205 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.650167774 +0000 UTC m=+147.611414807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
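[Editor's note] The interleaved UnmountVolume.TearDown and MountVolume.MountDevice failures above all share one cause: the kubelet has not yet seen a plugin registration for the CSI driver kubevirt.io.hostpath-provisioner, so each attach/detach attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 is requeued with a 500ms backoff. The csi-hostpathplugin-tbr4j container start logged at 21:42:30.528533 is the component that eventually performs that registration. As an illustrative way to watch for the registration from outside the node (this is not part of the log; the kubeconfig path and the node name "crc" are assumptions), one could read the node's storage.k8s.io/v1 CSINode object with client-go, which is the API-visible reflection of the kubelet's in-memory driver registry:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from a kubeconfig; the path is an assumption for this sketch.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The CSINode object for a node lists the CSI drivers its kubelet has
        // registered; the retry loop in the surrounding log should stop once
        // kubevirt.io.hostpath-provisioner appears here.
        csiNode, err := clientset.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range csiNode.Spec.Drivers {
            fmt.Printf("registered CSI driver: %s\n", d.Name)
        }
    }

Note that the kubelet itself consults its internal plugin manager (populated through the plugin-registration socket under /var/lib/kubelet/plugins_registry), not this API object; CSINode is merely a convenient read-only signal of the same state. [End note]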
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.251528 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.251738 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.751707564 +0000 UTC m=+147.712954597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.251811 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.252542 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.752534716 +0000 UTC m=+147.713781749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.353690 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.355147 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.854337134 +0000 UTC m=+147.815584167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.455844 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.456668 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:31.956640134 +0000 UTC m=+147.917887227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.560774 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.561298 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.061271769 +0000 UTC m=+148.022518802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.561444 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.562073 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.06206025 +0000 UTC m=+148.023307283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.567090 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 21:42:31 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld
Jan 30 21:42:31 crc kubenswrapper[4979]: [+]process-running ok
Jan 30 21:42:31 crc kubenswrapper[4979]: healthz check failed
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.567301 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.629972 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-464m7" event={"ID":"ebc2a677-6e7a-41ce-a3f4-063acddaa66b","Type":"ContainerStarted","Data":"b3c026fa420e4606ac34e547a6b7a85a7e573cef9bcf034ca99faf6e1d1f8690"}
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.634995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lbd69" event={"ID":"7a7b036f-4e32-47e9-b700-da7ef3615e4f","Type":"ContainerStarted","Data":"bd0ce3b9147a1ce3bbccee527472379520f0e1932c76c405f3cb2ccafdfe4f23"}
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.638282 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" event={"ID":"82c82db9-e29a-4e8f-a5d0-04baf5a8c54f","Type":"ContainerStarted","Data":"9704274a3f33f8474fb59cb4da6e1481581b742d3433473db1ff59652cc6bad4"}
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.640213 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" event={"ID":"ff61cd4b-2b9f-4588-be96-10038ccc4a92","Type":"ContainerStarted","Data":"0e6f69cd4614a1bb62b39b70bbd49625b932e4c6dcb736053a2748eac81dda1e"}
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.640260 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" event={"ID":"ff61cd4b-2b9f-4588-be96-10038ccc4a92","Type":"ContainerStarted","Data":"2fdea5ec5c945a9b137321bd0204027de83c52d16c6cd7e9cca2d07e312e0fe5"}
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.641026 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" event={"ID":"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d","Type":"ContainerStarted","Data":"e21e27f1657aa3412b2bbaae9b4d978e7dfbbe471a5bcfabcfd9847fa5154869"}
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.647413 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" event={"ID":"ed73bac2-f781-4475-b265-8c8820d10e3b","Type":"ContainerStarted","Data":"cfbfda9a20adf2a5b922c67c6eee61e1ebeffd08961856293dc8c03148aa86f5"}
event={"ID":"ed73bac2-f781-4475-b265-8c8820d10e3b","Type":"ContainerStarted","Data":"cfbfda9a20adf2a5b922c67c6eee61e1ebeffd08961856293dc8c03148aa86f5"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.649343 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" event={"ID":"15489ac0-9ae3-4068-973c-fd1ea98642c3","Type":"ContainerStarted","Data":"585161ecfcfec9bab6e3f6343cc5b39fbcc29e68b0b21ee9c50d8350eb065d80"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.649378 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" event={"ID":"15489ac0-9ae3-4068-973c-fd1ea98642c3","Type":"ContainerStarted","Data":"77916c27a3bed0009808e06c73482e7ba563d922fb5c460a56269b992ef94952"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.650377 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.652644 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" event={"ID":"6952a3c6-a471-489c-ba9a-9e4b5e9ac362","Type":"ContainerStarted","Data":"2f767921d1a6fcfe8ec614441edeb634f4060c8dd04fa012ee77b523d248b6de"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.653374 4979 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4lzp5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.653380 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.653418 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.654628 4979 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6285m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.654660 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" podUID="6952a3c6-a471-489c-ba9a-9e4b5e9ac362" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.661597 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bkvc5" podStartSLOduration=126.661575874 podStartE2EDuration="2m6.661575874s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
21:42:31.658715125 +0000 UTC m=+147.619962158" watchObservedRunningTime="2026-01-30 21:42:31.661575874 +0000 UTC m=+147.622822907" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.663442 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" event={"ID":"6ebf43de-28a1-4cb6-a008-7bcc970b96ac","Type":"ContainerStarted","Data":"c7062503aa0d42950ff3ebc012cb84f3dee665b71c85823f80ea9ee149341f67"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.664435 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.665101 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.165082051 +0000 UTC m=+148.126329094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.675918 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" event={"ID":"f1ebd25b-fae4-4659-ab8c-e57b0e9d9564","Type":"ContainerStarted","Data":"cc423f640a6c1f728bdf896e80aa1e69eac802aa858ff5796ad035df2aaf7dc5"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.677253 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.685152 4979 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-j5jdh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.685214 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" podUID="f1ebd25b-fae4-4659-ab8c-e57b0e9d9564" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.696899 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" event={"ID":"d768fc5d-52c2-4901-a7cd-759d26f88251","Type":"ContainerStarted","Data":"e2e02e31f3aabd3d8a1cf93131c32a8e0598193e429437391912bab37c40db11"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.708893 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" podStartSLOduration=126.708873763 podStartE2EDuration="2m6.708873763s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.707461364 +0000 UTC m=+147.668708397" watchObservedRunningTime="2026-01-30 21:42:31.708873763 +0000 UTC m=+147.670120796" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.709016 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" podStartSLOduration=126.709009676 podStartE2EDuration="2m6.709009676s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.68490807 +0000 UTC m=+147.646155103" watchObservedRunningTime="2026-01-30 21:42:31.709009676 +0000 UTC m=+147.670256699" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.711499 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" event={"ID":"66910c2a-724c-42a8-8511-a8ee6de7d140","Type":"ContainerStarted","Data":"2187665181f7367677f2c1b881a03ee8da637754087e54f885ca01e2dc936f43"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.716686 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" event={"ID":"5ec159e5-6cc8-4130-a83c-ad402c63e175","Type":"ContainerStarted","Data":"fb7e4a3ec1ad847ba658d47ac8876c6f93045c61520a643b940a25449f568fab"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.718339 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" event={"ID":"df702c9e-2d17-476e-9bbe-d41784bf809b","Type":"ContainerStarted","Data":"285227df2f2809d7308d35741df5ee3baf68a1030cfbabe7e20409189841dab7"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.720005 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" event={"ID":"4f0c12f1-c780-4020-921b-11e410503db3","Type":"ContainerStarted","Data":"6e1e9e6deb3a154c5b70c3d0fc41ce67d8193ceed8421a0bd23df8f6bbefcf82"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.720054 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" event={"ID":"4f0c12f1-c780-4020-921b-11e410503db3","Type":"ContainerStarted","Data":"0fbda49922ac71a267fd10280d675bf0512ad25e0f9eacfbce54b1f9080d913a"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.721278 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" event={"ID":"531bdeb2-b55c-4a3b-8fb5-1dca8478c479","Type":"ContainerStarted","Data":"95c0458ae28eb31ec71bdb02e60210790cc69fc574dd842485d70a015c01d44f"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.722302 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h6sv5" event={"ID":"cc25d794-4ead-4436-a026-179f655c13d4","Type":"ContainerStarted","Data":"964c8b1ba5415a6ffab5411d004a571cd2b1dc55669379c6f25606fce00667e5"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.723153 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l44fm" event={"ID":"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1","Type":"ContainerStarted","Data":"d37c5f724c00aa8dc23bb97b8f7b6c603468493b5ca7655a0f28578e807dbc4f"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.726099 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" event={"ID":"dda3a423-1b53-4e85-9ef1-123fe54ceb98","Type":"ContainerStarted","Data":"b9a563be9831c29811a1e48898f52a6678ef7612f03e41eb2ca6c66ea2fba85a"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.731456 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" podStartSLOduration=126.731432867 podStartE2EDuration="2m6.731432867s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.727172089 +0000 UTC m=+147.688419122" watchObservedRunningTime="2026-01-30 21:42:31.731432867 +0000 UTC m=+147.692679900" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.733743 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" event={"ID":"7ad194c8-35db-4a68-9c59-575a8971d714","Type":"ContainerStarted","Data":"46f64f4bbb3ee52e01fe4fc6d1e4c9b080bec1744acc2f526ac779eea222e447"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.737270 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" event={"ID":"7638c8d5-0616-4612-9d15-7594e4f74184","Type":"ContainerStarted","Data":"b7fba9ff02b4535cb5aa87018eb11f5295e464a8f326227ae83662f7e182723e"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.739366 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" event={"ID":"38abc107-38ba-4e77-b00f-eece6eb28537","Type":"ContainerStarted","Data":"eb0fadc1ba1644ce574ed94626a635db9b6003b8cf16bb3ff670e8f86fc0cd06"} Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.747881 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rthrv" podStartSLOduration=126.747858541 podStartE2EDuration="2m6.747858541s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.747355778 +0000 UTC m=+147.708602811" watchObservedRunningTime="2026-01-30 21:42:31.747858541 +0000 UTC m=+147.709105574" Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.766794 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.768384 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.769640 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cckwg" podStartSLOduration=126.769578842 podStartE2EDuration="2m6.769578842s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.768300037 +0000 UTC m=+147.729547080" watchObservedRunningTime="2026-01-30 21:42:31.769578842 +0000 UTC m=+147.730825885"
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.793783 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vcpm" podStartSLOduration=126.793760342 podStartE2EDuration="2m6.793760342s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.792227829 +0000 UTC m=+147.753474862" watchObservedRunningTime="2026-01-30 21:42:31.793760342 +0000 UTC m=+147.755007375"
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.814634 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtsdv" podStartSLOduration=126.814613299 podStartE2EDuration="2m6.814613299s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.814545476 +0000 UTC m=+147.775792509" watchObservedRunningTime="2026-01-30 21:42:31.814613299 +0000 UTC m=+147.775860332"
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.834626 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cjfp6" podStartSLOduration=126.834576531 podStartE2EDuration="2m6.834576531s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:31.833911312 +0000 UTC m=+147.795158345" watchObservedRunningTime="2026-01-30 21:42:31.834576531 +0000 UTC m=+147.795823564"
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.868695 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.870809 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.370768272 +0000 UTC m=+148.332015535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:31 crc kubenswrapper[4979]: I0130 21:42:31.971076 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:31 crc kubenswrapper[4979]: E0130 21:42:31.971485 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.471468129 +0000 UTC m=+148.432715162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.039948 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.040253 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.072643 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.072911 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.572865054 +0000 UTC m=+148.534112097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.073908 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.074388 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.574368065 +0000 UTC m=+148.535615098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.176950 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dk444"]
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.178474 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk444"
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.179399 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.179630 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.679593116 +0000 UTC m=+148.640840149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.179816 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc"
Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.180307 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.680297196 +0000 UTC m=+148.641544229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.182681 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.193446 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk444"]
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.282165 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.283193 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rtgh\" (UniqueName: \"kubernetes.io/projected/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-kube-api-access-2rtgh\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444"
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.283363 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-utilities\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444"
Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.283632 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444"
\"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.285356 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.785299222 +0000 UTC m=+148.746546255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.361097 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-krrkl"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.362319 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.364990 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.377508 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-krrkl"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.397849 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.398549 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rtgh\" (UniqueName: \"kubernetes.io/projected/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-kube-api-access-2rtgh\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.398580 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-utilities\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.398609 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.398686 4979 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:32.898662708 +0000 UTC m=+148.859909741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.399362 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.399626 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-utilities\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.429154 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rtgh\" (UniqueName: \"kubernetes.io/projected/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-kube-api-access-2rtgh\") pod \"certified-operators-dk444\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.506463 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.506670 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.006633675 +0000 UTC m=+148.967880718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.506751 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-utilities\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.506807 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-catalog-content\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.506834 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.506883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snrx6\" (UniqueName: \"kubernetes.io/projected/9ced41eb-6843-4dfe-81c7-267a56f75a73-kube-api-access-snrx6\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.507279 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.007266903 +0000 UTC m=+148.968513936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.570741 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-npfvh"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.572464 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.573417 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:42:32 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld Jan 30 21:42:32 crc kubenswrapper[4979]: [+]process-running ok Jan 30 21:42:32 crc kubenswrapper[4979]: healthz check failed Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.573460 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.575211 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.591639 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-npfvh"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.609799 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.611616 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-catalog-content\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.611742 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snrx6\" (UniqueName: \"kubernetes.io/projected/9ced41eb-6843-4dfe-81c7-267a56f75a73-kube-api-access-snrx6\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.611940 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-utilities\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.612639 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-utilities\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.612763 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 21:42:33.112741811 +0000 UTC m=+149.073988854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.613095 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-catalog-content\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.652229 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snrx6\" (UniqueName: \"kubernetes.io/projected/9ced41eb-6843-4dfe-81c7-267a56f75a73-kube-api-access-snrx6\") pod \"community-operators-krrkl\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.699130 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.715677 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-utilities\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.715757 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqdhj\" (UniqueName: \"kubernetes.io/projected/568a44ae-c892-48a7-b4c0-2d83606e7b95-kube-api-access-kqdhj\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.715826 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-catalog-content\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.715950 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.716504 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 21:42:33.216485062 +0000 UTC m=+149.177732175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.762708 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" event={"ID":"66910c2a-724c-42a8-8511-a8ee6de7d140","Type":"ContainerStarted","Data":"f456ac2434eebd53eebc53e96333fc8771412d72ac2190c266bcbcce812eddf3"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.782517 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-454jj"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.785689 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.789734 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-454jj"] Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.793388 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" event={"ID":"4a58b6b9-d3d1-4b83-96b7-6ccbbc124a8d","Type":"ContainerStarted","Data":"61fc6a0f3fda396062e59232a12907a401403d66450e2ec645447a2e479e0077"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.794409 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-trsfj" podStartSLOduration=127.794386207 podStartE2EDuration="2m7.794386207s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:32.793250775 +0000 UTC m=+148.754497808" watchObservedRunningTime="2026-01-30 21:42:32.794386207 +0000 UTC m=+148.755633240" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.817426 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.818260 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.318227646 +0000 UTC m=+149.279474689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.830177 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-catalog-content\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.830361 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.830507 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-utilities\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.830576 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqdhj\" (UniqueName: \"kubernetes.io/projected/568a44ae-c892-48a7-b4c0-2d83606e7b95-kube-api-access-kqdhj\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.819139 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" event={"ID":"4f0c12f1-c780-4020-921b-11e410503db3","Type":"ContainerStarted","Data":"e42646c812528d15f2a790d8d81db7668ecda68e0f00345061af9f7e816e05ed"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.831589 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-catalog-content\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.831894 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.331883094 +0000 UTC m=+149.293130127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.832341 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-utilities\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.868463 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lbd69" event={"ID":"7a7b036f-4e32-47e9-b700-da7ef3615e4f","Type":"ContainerStarted","Data":"214b5add41548eca3427a4f06d3aa6644796d2a8a07422af3e97536ab39cff51"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.879241 4979 generic.go:334] "Generic (PLEG): container finished" podID="b43f94f0-791b-49cc-afe0-95ec18aa1f07" containerID="72cb010adee8d42eeef544e6077e19cc4bd21ebcf2f83845c5c858b217b33727" exitCode=0 Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.879374 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" event={"ID":"b43f94f0-791b-49cc-afe0-95ec18aa1f07","Type":"ContainerDied","Data":"72cb010adee8d42eeef544e6077e19cc4bd21ebcf2f83845c5c858b217b33727"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.908930 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" event={"ID":"e7334e56-32c0-40f4-b60d-afab26024b6a","Type":"ContainerStarted","Data":"c2d2e24f3144b21e04d2e032631ac585cd9982d28b6ed4f4ae367e274993d023"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.921124 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqdhj\" (UniqueName: \"kubernetes.io/projected/568a44ae-c892-48a7-b4c0-2d83606e7b95-kube-api-access-kqdhj\") pod \"certified-operators-npfvh\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.929391 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9xcb5" podStartSLOduration=127.929368662 podStartE2EDuration="2m7.929368662s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:32.894632881 +0000 UTC m=+148.855879914" watchObservedRunningTime="2026-01-30 21:42:32.929368662 +0000 UTC m=+148.890615695" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.931675 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nzzr2" podStartSLOduration=127.931656795 podStartE2EDuration="2m7.931656795s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:32.927589422 +0000 UTC m=+148.888836455" watchObservedRunningTime="2026-01-30 21:42:32.931656795 +0000 UTC m=+148.892903828" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.932122 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h6sv5" event={"ID":"cc25d794-4ead-4436-a026-179f655c13d4","Type":"ContainerStarted","Data":"37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.933748 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.935299 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.935590 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-utilities\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.935849 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-catalog-content\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.935890 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.935918 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.936114 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hzvf\" (UniqueName: \"kubernetes.io/projected/82df7d39-6821-4916-b8c9-534688ca3d5e-kube-api-access-7hzvf\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:32 crc kubenswrapper[4979]: E0130 21:42:32.936270 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 21:42:33.436248082 +0000 UTC m=+149.397495115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.942885 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l44fm" event={"ID":"45cde1ce-04ec-4fdd-bfc0-10d072a9eff1","Type":"ContainerStarted","Data":"13a51e61149fc5f98737cac1ce5720f99a121ed742b47872bf041837740284fa"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.944222 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.944633 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.952551 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.953440 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" event={"ID":"df702c9e-2d17-476e-9bbe-d41784bf809b","Type":"ContainerStarted","Data":"18b5e546aa644f863cdb950a2d9deb6a5a67659887d3244885a8dc6589b88ed4"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.966342 4979 patch_prober.go:28] interesting pod/console-operator-58897d9998-l44fm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.966439 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-l44fm" podUID="45cde1ce-04ec-4fdd-bfc0-10d072a9eff1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.967062 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" event={"ID":"531bdeb2-b55c-4a3b-8fb5-1dca8478c479","Type":"ContainerStarted","Data":"b376144d1e0d4b970f24cae31a0716b254f7d2db6ac1cd2a3feb27398072c429"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.979350 4979 generic.go:334] "Generic (PLEG): container finished" 
podID="d768fc5d-52c2-4901-a7cd-759d26f88251" containerID="7556cbd99f95ccceb7bf0d0fac7c1b3772888d4d04c3a35219fe9953959db2aa" exitCode=0 Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.979428 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" event={"ID":"d768fc5d-52c2-4901-a7cd-759d26f88251","Type":"ContainerDied","Data":"7556cbd99f95ccceb7bf0d0fac7c1b3772888d4d04c3a35219fe9953959db2aa"} Jan 30 21:42:32 crc kubenswrapper[4979]: I0130 21:42:32.988492 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" podStartSLOduration=127.988462347 podStartE2EDuration="2m7.988462347s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:32.987370286 +0000 UTC m=+148.948617339" watchObservedRunningTime="2026-01-30 21:42:32.988462347 +0000 UTC m=+148.949709380" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.006164 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-lbd69" podStartSLOduration=9.006135116 podStartE2EDuration="9.006135116s" podCreationTimestamp="2026-01-30 21:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.005691553 +0000 UTC m=+148.966938596" watchObservedRunningTime="2026-01-30 21:42:33.006135116 +0000 UTC m=+148.967382149" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.053359 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.053527 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-catalog-content\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.053611 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.053656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hzvf\" (UniqueName: \"kubernetes.io/projected/82df7d39-6821-4916-b8c9-534688ca3d5e-kube-api-access-7hzvf\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.053685 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-utilities\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.053733 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.054620 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" event={"ID":"2063d8fc-0614-40e7-be84-ebfbda9acd89","Type":"ContainerStarted","Data":"323a814f514a92a3735576883357a300caae64e4a2f5b2949c141b5734b46ee2"} Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.061111 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-catalog-content\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.064263 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-utilities\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.064723 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.070924 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.570902978 +0000 UTC m=+149.532150011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.074915 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" event={"ID":"5ec159e5-6cc8-4130-a83c-ad402c63e175","Type":"ContainerStarted","Data":"35565181a174192acb9291137cecf5a45f764a491611e072d2e49017c51cf2de"} Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.074980 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" event={"ID":"5ec159e5-6cc8-4130-a83c-ad402c63e175","Type":"ContainerStarted","Data":"96f32a15d80a8ac744380345ad4913ecb6cb35036100651001af43125a344cb9"} Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.084449 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-l44fm" podStartSLOduration=128.084415981 podStartE2EDuration="2m8.084415981s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.075720511 +0000 UTC m=+149.036967554" watchObservedRunningTime="2026-01-30 21:42:33.084415981 +0000 UTC m=+149.045663014" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.091549 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.093794 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.100217 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.108934 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.109009 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" event={"ID":"ed73bac2-f781-4475-b265-8c8820d10e3b","Type":"ContainerStarted","Data":"5f287c47abbe3b8b7dfd6f717716db44a0acfa82ad6eb5ab897e1a29ea1257c9"} Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.113462 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.143718 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-464m7" event={"ID":"ebc2a677-6e7a-41ce-a3f4-063acddaa66b","Type":"ContainerStarted","Data":"6cf9b18344b975c59991ebbe8d277764baa5dcef570d47a0584cd92ec3e70129"} Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.146978 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hzvf\" (UniqueName: \"kubernetes.io/projected/82df7d39-6821-4916-b8c9-534688ca3d5e-kube-api-access-7hzvf\") pod \"community-operators-454jj\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.147961 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-464m7" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.148004 4979 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6285m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.148059 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" podUID="6952a3c6-a471-489c-ba9a-9e4b5e9ac362" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.148109 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.148462 4979 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4lzp5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.148557 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.154876 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.155812 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.655788716 +0000 UTC m=+149.617035749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.184851 4979 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4zkpx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.184929 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.185233 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j5jdh" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.185185 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-h6sv5" podStartSLOduration=128.185158309 podStartE2EDuration="2m8.185158309s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.163048168 +0000 UTC m=+149.124295201" watchObservedRunningTime="2026-01-30 21:42:33.185158309 +0000 UTC m=+149.146405342" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.231716 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" podStartSLOduration=128.231686497 podStartE2EDuration="2m8.231686497s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.216925048 +0000 UTC m=+149.178172081" watchObservedRunningTime="2026-01-30 21:42:33.231686497 +0000 UTC m=+149.192933530" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.258618 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.262997 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.762975852 +0000 UTC m=+149.724222885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: W0130 21:42:33.285491 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ceea51c_f0b8_4de3_be53_f1d857b3a1b8.slice/crio-295443fe09756d263200da5b0351f58fb651db4b6823dfb3399c5cfb72b8ea20 WatchSource:0}: Error finding container 295443fe09756d263200da5b0351f58fb651db4b6823dfb3399c5cfb72b8ea20: Status 404 returned error can't find the container with id 295443fe09756d263200da5b0351f58fb651db4b6823dfb3399c5cfb72b8ea20 Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.285854 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk444"] Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.330944 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66dgs" podStartSLOduration=128.330915441 podStartE2EDuration="2m8.330915441s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.279083897 +0000 UTC m=+149.240330930" watchObservedRunningTime="2026-01-30 21:42:33.330915441 +0000 UTC m=+149.292162484" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.357647 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podStartSLOduration=128.357614541 podStartE2EDuration="2m8.357614541s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.328215928 +0000 UTC m=+149.289462991" watchObservedRunningTime="2026-01-30 21:42:33.357614541 +0000 UTC m=+149.318861574" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.360150 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.360764 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.860738967 +0000 UTC m=+149.821986000 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.410150 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" podStartSLOduration=128.410125403 podStartE2EDuration="2m8.410125403s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.372636306 +0000 UTC m=+149.333883369" watchObservedRunningTime="2026-01-30 21:42:33.410125403 +0000 UTC m=+149.371372436" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.423595 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-454jj" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.456828 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpkqd" podStartSLOduration=128.456799134 podStartE2EDuration="2m8.456799134s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.438388115 +0000 UTC m=+149.399635178" watchObservedRunningTime="2026-01-30 21:42:33.456799134 +0000 UTC m=+149.418046167" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.461573 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.462150 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:33.962130242 +0000 UTC m=+149.923377275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.490072 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-464m7" podStartSLOduration=9.490048255 podStartE2EDuration="9.490048255s" podCreationTimestamp="2026-01-30 21:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.485562391 +0000 UTC m=+149.446809444" watchObservedRunningTime="2026-01-30 21:42:33.490048255 +0000 UTC m=+149.451295288" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.566260 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.566769 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.066746177 +0000 UTC m=+150.027993210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.573181 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-trhfm" podStartSLOduration=128.573128823 podStartE2EDuration="2m8.573128823s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:33.558611222 +0000 UTC m=+149.519858265" watchObservedRunningTime="2026-01-30 21:42:33.573128823 +0000 UTC m=+149.534375876" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.652184 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-krrkl"] Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.668400 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.668822 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.168805111 +0000 UTC m=+150.130052144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.726113 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:42:33 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld Jan 30 21:42:33 crc kubenswrapper[4979]: [+]process-running ok Jan 30 21:42:33 crc kubenswrapper[4979]: healthz check failed Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.726176 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.772814 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.773367 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.273331532 +0000 UTC m=+150.234578565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.887959 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.891628 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.391594775 +0000 UTC m=+150.352841808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.994047 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.994569 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.494534042 +0000 UTC m=+150.455781075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:33 crc kubenswrapper[4979]: I0130 21:42:33.994840 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:33 crc kubenswrapper[4979]: E0130 21:42:33.995275 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.495253533 +0000 UTC m=+150.456500586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.095968 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-npfvh"] Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.096890 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.097303 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.597287226 +0000 UTC m=+150.558534259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.199153 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-969ns" event={"ID":"531bdeb2-b55c-4a3b-8fb5-1dca8478c479","Type":"ContainerStarted","Data":"fd0f76e38ac5b45b32d6f319fc6ab94eb270980e71cc410c3327587638df7123"} Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.210416 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.210831 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.710815487 +0000 UTC m=+150.672062520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.227510 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" event={"ID":"d768fc5d-52c2-4901-a7cd-759d26f88251","Type":"ContainerStarted","Data":"93fb7b341146c630d112b7a2050a9dcbc16bec742e82a3ba83c50f275ec23952"} Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.228201 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.233312 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9ca1d9085d56c9080a077e82122e2f38cba69a3090232fc00cc7acb0b68a10c4"} Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.259428 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk444" event={"ID":"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8","Type":"ContainerStarted","Data":"295443fe09756d263200da5b0351f58fb651db4b6823dfb3399c5cfb72b8ea20"} Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.268149 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" podStartSLOduration=129.268125463 podStartE2EDuration="2m9.268125463s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:34.265832309 +0000 UTC m=+150.227079342" watchObservedRunningTime="2026-01-30 21:42:34.268125463 +0000 UTC m=+150.229372496" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.279413 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-s86jb" event={"ID":"e7334e56-32c0-40f4-b60d-afab26024b6a","Type":"ContainerStarted","Data":"cb7d83055f15431e65b94ffbcef8b7f017093ccde5577adad7fa2c1ba83772fb"} Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.321916 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" event={"ID":"0f7429df-aeda-4c76-9051-401488358e6c","Type":"ContainerStarted","Data":"1b3e92d4f597c50514a726bce1f0da466e1e891ed10de47ef04906e7508ae0f8"} Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.326127 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.326217 4979 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.826197919 +0000 UTC m=+150.787444952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.344705 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.346810 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.846777939 +0000 UTC m=+150.808024972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.374678 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npfvh" event={"ID":"568a44ae-c892-48a7-b4c0-2d83606e7b95","Type":"ContainerStarted","Data":"9e701107804895c162dc5dbfb55c5fb4850bb1995cf07bbee85bb8f8a3ce5a6f"} Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.396473 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krrkl" event={"ID":"9ced41eb-6843-4dfe-81c7-267a56f75a73","Type":"ContainerStarted","Data":"ef80ed7d6ea466150a57b7d4595c84c46d03f43e54dcb40334059a4c99c74be3"} Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.398663 4979 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4lzp5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.398740 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.405022 4979 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4zkpx 
container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.405138 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.406222 4979 patch_prober.go:28] interesting pod/console-operator-58897d9998-l44fm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.406528 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-l44fm" podUID="45cde1ce-04ec-4fdd-bfc0-10d072a9eff1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.426750 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.447338 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.449558 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:34.949510551 +0000 UTC m=+150.910757584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.527958 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wjwlb"] Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.529730 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.536154 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.550004 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.563277 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjwlb"] Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.567171 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.067149406 +0000 UTC m=+151.028396439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.573131 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-454jj"] Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.574301 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:42:34 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld Jan 30 21:42:34 crc kubenswrapper[4979]: [+]process-running ok Jan 30 21:42:34 crc kubenswrapper[4979]: healthz check failed Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.574521 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.623160 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6285m" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.662754 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.663193 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.163165393 +0000 UTC m=+151.124412426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.663244 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nls66\" (UniqueName: \"kubernetes.io/projected/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-kube-api-access-nls66\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.663468 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-catalog-content\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.663503 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-utilities\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.764899 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nls66\" (UniqueName: \"kubernetes.io/projected/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-kube-api-access-nls66\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.764981 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-catalog-content\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.765008 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-utilities\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.765067 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 
21:42:34.765539 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.265517974 +0000 UTC m=+151.226765007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.766418 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-catalog-content\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.766658 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-utilities\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.786594 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qmzzl"] Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.787960 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.836465 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nls66\" (UniqueName: \"kubernetes.io/projected/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-kube-api-access-nls66\") pod \"redhat-marketplace-wjwlb\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.855929 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmzzl"] Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.867080 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.867460 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.367438525 +0000 UTC m=+151.328685558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.871126 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.972222 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqrjw\" (UniqueName: \"kubernetes.io/projected/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-kube-api-access-gqrjw\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.972269 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-utilities\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.972290 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-catalog-content\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:42:34 crc kubenswrapper[4979]: I0130 21:42:34.972327 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:34 crc kubenswrapper[4979]: E0130 21:42:34.972736 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.472717967 +0000 UTC m=+151.433965000 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.073382 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.073508 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.573479045 +0000 UTC m=+151.534726078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.074014 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqrjw\" (UniqueName: \"kubernetes.io/projected/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-kube-api-access-gqrjw\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.074061 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-utilities\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.074082 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-catalog-content\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.074110 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.074402 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.574386301 +0000 UTC m=+151.535633334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.075290 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-utilities\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.075355 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-catalog-content\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.121365 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqrjw\" (UniqueName: \"kubernetes.io/projected/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-kube-api-access-gqrjw\") pod \"redhat-marketplace-qmzzl\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.153448 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.177377 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.177834 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.677808642 +0000 UTC m=+151.639055675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.227218 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.283654 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.284548 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.784533495 +0000 UTC m=+151.745780528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.386468 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.386745 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43f94f0-791b-49cc-afe0-95ec18aa1f07-secret-volume\") pod \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.386831 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43f94f0-791b-49cc-afe0-95ec18aa1f07-config-volume\") pod \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.386905 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b7tr\" (UniqueName: \"kubernetes.io/projected/b43f94f0-791b-49cc-afe0-95ec18aa1f07-kube-api-access-2b7tr\") pod \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\" (UID: \"b43f94f0-791b-49cc-afe0-95ec18aa1f07\") " Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.386975 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.886937938 +0000 UTC m=+151.848184971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.387338 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.387913 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.887892205 +0000 UTC m=+151.849139238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.389575 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.395839 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b43f94f0-791b-49cc-afe0-95ec18aa1f07-config-volume" (OuterVolumeSpecName: "config-volume") pod "b43f94f0-791b-49cc-afe0-95ec18aa1f07" (UID: "b43f94f0-791b-49cc-afe0-95ec18aa1f07"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.405522 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tdvvn" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.408778 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2tvd8"] Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.409660 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43f94f0-791b-49cc-afe0-95ec18aa1f07" containerName="collect-profiles" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.409746 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43f94f0-791b-49cc-afe0-95ec18aa1f07" containerName="collect-profiles" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.410487 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43f94f0-791b-49cc-afe0-95ec18aa1f07" containerName="collect-profiles" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.414013 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.430258 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43f94f0-791b-49cc-afe0-95ec18aa1f07-kube-api-access-2b7tr" (OuterVolumeSpecName: "kube-api-access-2b7tr") pod "b43f94f0-791b-49cc-afe0-95ec18aa1f07" (UID: "b43f94f0-791b-49cc-afe0-95ec18aa1f07"). InnerVolumeSpecName "kube-api-access-2b7tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.435673 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.439107 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjwlb"] Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.442468 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43f94f0-791b-49cc-afe0-95ec18aa1f07-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b43f94f0-791b-49cc-afe0-95ec18aa1f07" (UID: "b43f94f0-791b-49cc-afe0-95ec18aa1f07"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.451567 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tvd8"] Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.457792 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.457866 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.458267 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.458445 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.493355 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.494245 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b43f94f0-791b-49cc-afe0-95ec18aa1f07-secret-volume\") on node \"crc\" DevicePath 
\"\"" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.494352 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43f94f0-791b-49cc-afe0-95ec18aa1f07-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.494437 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b7tr\" (UniqueName: \"kubernetes.io/projected/b43f94f0-791b-49cc-afe0-95ec18aa1f07-kube-api-access-2b7tr\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.494636 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:35.994613208 +0000 UTC m=+151.955860241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.499152 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"69a009f35ded371aabf5b7792a76efbd01c8b7cef3c7f0785e7abc9f88921676"} Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.499282 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"126421ffab26a0306583a4d4b26dc1a88feae26774f51960306aee1d9d068837"} Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.500854 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.517361 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" event={"ID":"b43f94f0-791b-49cc-afe0-95ec18aa1f07","Type":"ContainerDied","Data":"f9092fc40924a5c4c5ccda219effa1674a3cd66531deeb6ed63c03f809984b37"} Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.517413 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9092fc40924a5c4c5ccda219effa1674a3cd66531deeb6ed63c03f809984b37" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.517569 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.548403 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"31bfef1cf6782c630454f26fc196708d03ea5fd5e3bc34fe717e150e46e5924b"} Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.558811 4979 generic.go:334] "Generic (PLEG): container finished" podID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerID="d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1" exitCode=0 Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.558953 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk444" event={"ID":"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8","Type":"ContainerDied","Data":"d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1"} Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.577348 4979 generic.go:334] "Generic (PLEG): container finished" podID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerID="82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab" exitCode=0 Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.577466 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npfvh" event={"ID":"568a44ae-c892-48a7-b4c0-2d83606e7b95","Type":"ContainerDied","Data":"82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab"} Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.578558 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:42:35 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld Jan 30 21:42:35 crc kubenswrapper[4979]: [+]process-running ok Jan 30 21:42:35 crc kubenswrapper[4979]: healthz check failed Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.578595 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.587441 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sg6j7"] Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.588818 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.603160 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.603250 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmmtm\" (UniqueName: \"kubernetes.io/projected/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-kube-api-access-nmmtm\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.603387 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-utilities\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.603441 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-catalog-content\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.604507 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.104479947 +0000 UTC m=+152.065727180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.605456 4979 generic.go:334] "Generic (PLEG): container finished" podID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerID="ac193c08f8b37b1caaa0e8f2fd6642d2080bfcadd0f1988fbb608a5fad551f06" exitCode=0 Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.605586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krrkl" event={"ID":"9ced41eb-6843-4dfe-81c7-267a56f75a73","Type":"ContainerDied","Data":"ac193c08f8b37b1caaa0e8f2fd6642d2080bfcadd0f1988fbb608a5fad551f06"} Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.657196 4979 generic.go:334] "Generic (PLEG): container finished" podID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerID="bf235c47905ef6c38fcc7f3601d64c6f0ba215a6796ab2b1da97239f211b40de" exitCode=0 Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.657276 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-454jj" event={"ID":"82df7d39-6821-4916-b8c9-534688ca3d5e","Type":"ContainerDied","Data":"bf235c47905ef6c38fcc7f3601d64c6f0ba215a6796ab2b1da97239f211b40de"} Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.657306 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-454jj" event={"ID":"82df7d39-6821-4916-b8c9-534688ca3d5e","Type":"ContainerStarted","Data":"897e930b920945770fe85e65189da3f41f538afe25ecb7f6857d9256eed7d54a"} Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.670788 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sg6j7"] Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.672411 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"68ecc18b7726954e2cba0cf9dcf99d9f08243cc72271afa623e628583a74076f"} Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.672463 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"766900bad4d97d184ca34531072e3c7ee0435fdc17acce2b96276917ce8decc7"} Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.704690 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.705368 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-utilities\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " 
pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.705434 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-catalog-content\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.705477 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-utilities\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.705531 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmmtm\" (UniqueName: \"kubernetes.io/projected/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-kube-api-access-nmmtm\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.705572 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-catalog-content\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.705605 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7xc\" (UniqueName: \"kubernetes.io/projected/444df6ed-3c43-4310-adc6-69ab0a9ea702-kube-api-access-gc7xc\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.706369 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.206329845 +0000 UTC m=+152.167576888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.706873 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-utilities\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.706947 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-catalog-content\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.780557 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmmtm\" (UniqueName: \"kubernetes.io/projected/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-kube-api-access-nmmtm\") pod \"redhat-operators-2tvd8\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.810118 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-catalog-content\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.810178 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc7xc\" (UniqueName: \"kubernetes.io/projected/444df6ed-3c43-4310-adc6-69ab0a9ea702-kube-api-access-gc7xc\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.810292 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-utilities\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.810317 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.810692 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 21:42:36.310675083 +0000 UTC m=+152.271922116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.814316 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-utilities\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.814538 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-catalog-content\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.818098 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.897501 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc7xc\" (UniqueName: \"kubernetes.io/projected/444df6ed-3c43-4310-adc6-69ab0a9ea702-kube-api-access-gc7xc\") pod \"redhat-operators-sg6j7\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.917570 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:35 crc kubenswrapper[4979]: E0130 21:42:35.918099 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.418061883 +0000 UTC m=+152.379308916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:35 crc kubenswrapper[4979]: I0130 21:42:35.992619 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.019521 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.019971 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.519949702 +0000 UTC m=+152.481196735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.129817 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.129918 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.629890395 +0000 UTC m=+152.591137428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.141199 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.141693 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.641677351 +0000 UTC m=+152.602924384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.179337 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-l44fm" Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.242887 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.243417 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.743392465 +0000 UTC m=+152.704639498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.292336 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmzzl"] Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.344859 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.345277 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.845263493 +0000 UTC m=+152.806510526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.446610 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.446880 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.946833844 +0000 UTC m=+152.908080887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.447462 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.447856 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:36.947840992 +0000 UTC m=+152.909088025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.548341 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.548473 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.048452706 +0000 UTC m=+153.009699739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.548769 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.549087 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.049080523 +0000 UTC m=+153.010327556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.577201 4979 patch_prober.go:28] interesting pod/router-default-5444994796-hgm9w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:42:36 crc kubenswrapper[4979]: [-]has-synced failed: reason withheld Jan 30 21:42:36 crc kubenswrapper[4979]: [+]process-running ok Jan 30 21:42:36 crc kubenswrapper[4979]: healthz check failed Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.577270 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hgm9w" podUID="4334e640-e3c2-4238-b7da-85e73bda80af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.650051 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.650539 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.150516339 +0000 UTC m=+153.111763372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.704788 4979 generic.go:334] "Generic (PLEG): container finished" podID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerID="79a85f996439ff844121a3f1030805086e2c3395fd9f9a97d7660f7b7319ecdd" exitCode=0 Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.705315 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjwlb" event={"ID":"cfb214a7-6df6-4fd6-a74c-db4f38b0a086","Type":"ContainerDied","Data":"79a85f996439ff844121a3f1030805086e2c3395fd9f9a97d7660f7b7319ecdd"} Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.705355 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjwlb" event={"ID":"cfb214a7-6df6-4fd6-a74c-db4f38b0a086","Type":"ContainerStarted","Data":"69b34253c166acfc981a0414523d053e63aae7c6e06110f5fe68cf8028008964"} Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.729492 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerStarted","Data":"3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065"} Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.729541 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerStarted","Data":"4c62920e03a89d4d5765a230e2b55c002afe184d080ace3bcaa5b06f8f97c1f4"} Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.757514 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.757907 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.25789316 +0000 UTC m=+153.219140193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.858893 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.859655 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.359633675 +0000 UTC m=+153.320880708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.859985 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.872135 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.372120291 +0000 UTC m=+153.333367324 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.890888 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dqtmx" Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.923923 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sg6j7"] Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.945998 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tvd8"] Jan 30 21:42:36 crc kubenswrapper[4979]: I0130 21:42:36.962201 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:36 crc kubenswrapper[4979]: E0130 21:42:36.962736 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.462717407 +0000 UTC m=+153.423964440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.063781 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.064178 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.564164824 +0000 UTC m=+153.525411857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.165166 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.165752 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.665732094 +0000 UTC m=+153.626979127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.266761 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.267165 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.767148801 +0000 UTC m=+153.728395834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.368133 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.368268 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.868245268 +0000 UTC m=+153.829492301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.368850 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.369247 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.869237135 +0000 UTC m=+153.830484168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.470253 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.470693 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:37.970667612 +0000 UTC m=+153.931914645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.563013 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.569251 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.572298 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.572771 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.072752206 +0000 UTC m=+154.033999239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.652829 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.653708 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.675105 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.677324 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.177278779 +0000 UTC m=+154.138525812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.682877 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.683214 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.694567 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.779600 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.779727 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 
21:42:37.779760 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.780263 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.280239067 +0000 UTC m=+154.241486090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.784369 4979 generic.go:334] "Generic (PLEG): container finished" podID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerID="3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065" exitCode=0 Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.784539 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerDied","Data":"3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065"} Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.807469 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" event={"ID":"0f7429df-aeda-4c76-9051-401488358e6c","Type":"ContainerStarted","Data":"ef234ccd91820f9c9ec287127934f83d0c0b7196cf7358d463dd2dca8996f477"} Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.810939 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerStarted","Data":"87982f21eeaee850aff8e29886551952617d82411b159837b48e46f7e706dfb9"} Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.837720 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerStarted","Data":"2225585b885540daf5c8798c55ba2f9f3246f245430840cea94336a10b265b9b"} Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.847922 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hgm9w" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.888171 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.888391 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.888417 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.888910 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.388890533 +0000 UTC m=+154.350137566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.890240 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.917408 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:37 crc kubenswrapper[4979]: I0130 21:42:37.989530 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:37 crc kubenswrapper[4979]: E0130 21:42:37.990045 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.490007351 +0000 UTC m=+154.451254384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.001629 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.057241 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.097629 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.097971 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.597944477 +0000 UTC m=+154.559191510 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.098024 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.098473 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.598463822 +0000 UTC m=+154.559710855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.201684 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.201919 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.701876753 +0000 UTC m=+154.663123796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.201972 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.202497 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.70247775 +0000 UTC m=+154.663724783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.304138 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.304945 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.804922104 +0000 UTC m=+154.766169137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.408231 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.408710 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:38.908685625 +0000 UTC m=+154.869932658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.511158 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.511700 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.011649764 +0000 UTC m=+154.972896797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.561783 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.614415 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.614793 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.114779797 +0000 UTC m=+155.076026830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.662436 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.715211 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.716618 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.216600945 +0000 UTC m=+155.177847978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.819302 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.819691 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.319677627 +0000 UTC m=+155.280924660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.892465 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.892523 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.903695 4979 patch_prober.go:28] interesting pod/console-f9d7485db-h6sv5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.904714 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-h6sv5" podUID="cc25d794-4ead-4436-a026-179f655c13d4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.914538 4979 generic.go:334] "Generic (PLEG): container finished" podID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerID="f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c" exitCode=0 Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.914669 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerDied","Data":"f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c"} Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.921249 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.921745 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.42171747 +0000 UTC m=+155.382964503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.921848 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:38 crc kubenswrapper[4979]: E0130 21:42:38.926997 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.426981685 +0000 UTC m=+155.388228718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:38 crc kubenswrapper[4979]: I0130 21:42:38.998490 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7","Type":"ContainerStarted","Data":"c55632324b36c1cf998f1fe1dace9e343e9837a67e54aa2706722b006b062334"} Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.027743 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.028281 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.528258308 +0000 UTC m=+155.489505331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.129589 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.130068 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.630048464 +0000 UTC m=+155.591295497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.130869 4979 generic.go:334] "Generic (PLEG): container finished" podID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerID="7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d" exitCode=0 Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.134081 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerDied","Data":"7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d"} Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.231172 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.231282 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.731258135 +0000 UTC m=+155.692505168 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.231722 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.244593 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.744567082 +0000 UTC m=+155.705814115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.332803 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.333194 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.833172784 +0000 UTC m=+155.794419817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.434223 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.434708 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:39.934691353 +0000 UTC m=+155.895938386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.535373 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.536247 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.036225522 +0000 UTC m=+155.997472555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.639634 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.640253 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.140174729 +0000 UTC m=+156.101421762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.743225 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.744268 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.243780345 +0000 UTC m=+156.205027378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.790563 4979 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.845588 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.846323 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.346295321 +0000 UTC m=+156.307542354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:39 crc kubenswrapper[4979]: I0130 21:42:39.946962 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:39 crc kubenswrapper[4979]: E0130 21:42:39.949055 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.449015263 +0000 UTC m=+156.410262296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.048936 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:40 crc kubenswrapper[4979]: E0130 21:42:40.049351 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.549335799 +0000 UTC m=+156.510582832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.151626 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:40 crc kubenswrapper[4979]: E0130 21:42:40.151842 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.651811424 +0000 UTC m=+156.613058457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.152249 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:40 crc kubenswrapper[4979]: E0130 21:42:40.153167 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.653149721 +0000 UTC m=+156.614396754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.162294 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" event={"ID":"0f7429df-aeda-4c76-9051-401488358e6c","Type":"ContainerStarted","Data":"5ec8c9bf60a3611a420968ee95c2d6718cfea93e6bf42a3a9fef27d42b1287e6"} Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.252826 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:40 crc kubenswrapper[4979]: E0130 21:42:40.253066 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.753012134 +0000 UTC m=+156.714259167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.253318 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:40 crc kubenswrapper[4979]: E0130 21:42:40.254661 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:42:40.75465192 +0000 UTC m=+156.715898953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rvdlc" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.285617 4979 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T21:42:39.790617051Z","Handler":null,"Name":""} Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.293135 4979 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.293186 4979 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.354077 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.358682 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.455705 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.489587 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.489740 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.537853 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rvdlc\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:40 crc kubenswrapper[4979]: I0130 21:42:40.724308 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.079635 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.201134 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7","Type":"ContainerStarted","Data":"ef2d34dbe946d828987a41a728d8d2e42578f678fbc80dd5cead01215db34bdf"} Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.214640 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rvdlc"] Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.217233 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" event={"ID":"0f7429df-aeda-4c76-9051-401488358e6c","Type":"ContainerStarted","Data":"4c74a85b0c156946a2d7c22e86e38054a855adbf4c322731e01b031c8aec76ec"} Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.260084 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tbr4j" podStartSLOduration=17.260056468 podStartE2EDuration="17.260056468s" podCreationTimestamp="2026-01-30 21:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:41.258620438 +0000 UTC m=+157.219867471" watchObservedRunningTime="2026-01-30 21:42:41.260056468 +0000 UTC m=+157.221303501" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.264347 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.264322196 podStartE2EDuration="4.264322196s" podCreationTimestamp="2026-01-30 21:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:41.227904588 +0000 UTC m=+157.189151651" watchObservedRunningTime="2026-01-30 21:42:41.264322196 +0000 UTC m=+157.225569229" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.490713 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.497470 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.499249 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.503391 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.503685 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.572911 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.573070 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.675210 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.675323 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.675664 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.708115 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:42:41 crc kubenswrapper[4979]: I0130 21:42:41.815867 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.311456 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" event={"ID":"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08","Type":"ContainerStarted","Data":"772ed6de3e14868a31eee279f850d2d08ee72d544656a44996cff23085c636cb"} Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.311832 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" event={"ID":"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08","Type":"ContainerStarted","Data":"7b232422461df3a64ba9f7d1e8e42a5bbd92a1d12e44b90cbcab93e3d93f6389"} Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.311854 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.335982 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.350247 4979 generic.go:334] "Generic (PLEG): container finished" podID="42ef219c-4a0f-4fba-8bc4-6fa51bc996f7" containerID="ef2d34dbe946d828987a41a728d8d2e42578f678fbc80dd5cead01215db34bdf" exitCode=0 Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.350349 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7","Type":"ContainerDied","Data":"ef2d34dbe946d828987a41a728d8d2e42578f678fbc80dd5cead01215db34bdf"} Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.357849 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" podStartSLOduration=137.357828241 podStartE2EDuration="2m17.357828241s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:42.3545407 +0000 UTC m=+158.315787723" watchObservedRunningTime="2026-01-30 21:42:42.357828241 +0000 UTC m=+158.319075274" Jan 30 21:42:42 crc kubenswrapper[4979]: W0130 21:42:42.363562 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod55091f68_f13e_49c0_9b8a_3285b7eddb4b.slice/crio-633b0da5f21a25bcf04c6556d01d9daedff23f08899b1685b57753da97f54c97 WatchSource:0}: Error finding container 633b0da5f21a25bcf04c6556d01d9daedff23f08899b1685b57753da97f54c97: Status 404 returned error can't find the container with id 633b0da5f21a25bcf04c6556d01d9daedff23f08899b1685b57753da97f54c97 Jan 30 21:42:42 crc kubenswrapper[4979]: I0130 21:42:42.635489 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-464m7" Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.382993 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55091f68-f13e-49c0-9b8a-3285b7eddb4b","Type":"ContainerStarted","Data":"633b0da5f21a25bcf04c6556d01d9daedff23f08899b1685b57753da97f54c97"} Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.717704 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.831152 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kube-api-access\") pod \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.831259 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir\") pod \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\" (UID: \"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7\") " Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.831416 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "42ef219c-4a0f-4fba-8bc4-6fa51bc996f7" (UID: "42ef219c-4a0f-4fba-8bc4-6fa51bc996f7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.831895 4979 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.838680 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "42ef219c-4a0f-4fba-8bc4-6fa51bc996f7" (UID: "42ef219c-4a0f-4fba-8bc4-6fa51bc996f7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:43 crc kubenswrapper[4979]: I0130 21:42:43.933531 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ef219c-4a0f-4fba-8bc4-6fa51bc996f7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:44 crc kubenswrapper[4979]: I0130 21:42:44.404941 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:42:44 crc kubenswrapper[4979]: I0130 21:42:44.404929 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"42ef219c-4a0f-4fba-8bc4-6fa51bc996f7","Type":"ContainerDied","Data":"c55632324b36c1cf998f1fe1dace9e343e9837a67e54aa2706722b006b062334"} Jan 30 21:42:44 crc kubenswrapper[4979]: I0130 21:42:44.405657 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c55632324b36c1cf998f1fe1dace9e343e9837a67e54aa2706722b006b062334" Jan 30 21:42:44 crc kubenswrapper[4979]: I0130 21:42:44.408722 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55091f68-f13e-49c0-9b8a-3285b7eddb4b","Type":"ContainerStarted","Data":"60c8506ebd3462a1feb71dee8fcf85525328c298ce8cf6229738956b544217d6"} Jan 30 21:42:45 crc kubenswrapper[4979]: I0130 21:42:45.438323 4979 generic.go:334] "Generic (PLEG): container finished" podID="55091f68-f13e-49c0-9b8a-3285b7eddb4b" containerID="60c8506ebd3462a1feb71dee8fcf85525328c298ce8cf6229738956b544217d6" exitCode=0 Jan 30 21:42:45 crc kubenswrapper[4979]: I0130 21:42:45.438736 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55091f68-f13e-49c0-9b8a-3285b7eddb4b","Type":"ContainerDied","Data":"60c8506ebd3462a1feb71dee8fcf85525328c298ce8cf6229738956b544217d6"} Jan 30 21:42:45 crc kubenswrapper[4979]: I0130 21:42:45.443713 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:42:45 crc kubenswrapper[4979]: I0130 21:42:45.443822 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:42:45 crc kubenswrapper[4979]: I0130 21:42:45.443834 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:42:45 crc kubenswrapper[4979]: I0130 21:42:45.443933 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:42:48 crc kubenswrapper[4979]: I0130 21:42:48.847636 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:48 crc kubenswrapper[4979]: I0130 21:42:48.857099 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/d0632938-c88a-4c22-b0e7-8f7473532f07-metrics-certs\") pod \"network-metrics-daemon-pk47q\" (UID: \"d0632938-c88a-4c22-b0e7-8f7473532f07\") " pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:48 crc kubenswrapper[4979]: I0130 21:42:48.890806 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:48 crc kubenswrapper[4979]: I0130 21:42:48.895620 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:42:49 crc kubenswrapper[4979]: I0130 21:42:49.086019 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pk47q" Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.003565 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.093674 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kubelet-dir\") pod \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.093917 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kube-api-access\") pod \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\" (UID: \"55091f68-f13e-49c0-9b8a-3285b7eddb4b\") " Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.096770 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "55091f68-f13e-49c0-9b8a-3285b7eddb4b" (UID: "55091f68-f13e-49c0-9b8a-3285b7eddb4b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.100304 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "55091f68-f13e-49c0-9b8a-3285b7eddb4b" (UID: "55091f68-f13e-49c0-9b8a-3285b7eddb4b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.195125 4979 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.195171 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55091f68-f13e-49c0-9b8a-3285b7eddb4b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.481780 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"55091f68-f13e-49c0-9b8a-3285b7eddb4b","Type":"ContainerDied","Data":"633b0da5f21a25bcf04c6556d01d9daedff23f08899b1685b57753da97f54c97"} Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.481836 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="633b0da5f21a25bcf04c6556d01d9daedff23f08899b1685b57753da97f54c97" Jan 30 21:42:50 crc kubenswrapper[4979]: I0130 21:42:50.481913 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.445397 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.445488 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.446002 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.446130 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.446221 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-hwb2t" Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.446893 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.446963 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": 
dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.447292 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"1fb05cef810c91cb605dd4c3bc4b66f2e11e171e5bc7b3102d68194e8af8b49d"} pod="openshift-console/downloads-7954f5f757-hwb2t" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 30 21:42:55 crc kubenswrapper[4979]: I0130 21:42:55.447406 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" containerID="cri-o://1fb05cef810c91cb605dd4c3bc4b66f2e11e171e5bc7b3102d68194e8af8b49d" gracePeriod=2 Jan 30 21:42:56 crc kubenswrapper[4979]: I0130 21:42:56.653074 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4zkpx"] Jan 30 21:42:56 crc kubenswrapper[4979]: I0130 21:42:56.653383 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" containerID="cri-o://0e6f69cd4614a1bb62b39b70bbd49625b932e4c6dcb736053a2748eac81dda1e" gracePeriod=30 Jan 30 21:42:56 crc kubenswrapper[4979]: I0130 21:42:56.663324 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"] Jan 30 21:42:56 crc kubenswrapper[4979]: I0130 21:42:56.663637 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" containerID="cri-o://1a95ca4d3d52fa45ac0c03598e04f51654e2ae85b01f82e3a46a20846a9d630c" gracePeriod=30 Jan 30 21:42:57 crc kubenswrapper[4979]: I0130 21:42:57.787980 4979 generic.go:334] "Generic (PLEG): container finished" podID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerID="1fb05cef810c91cb605dd4c3bc4b66f2e11e171e5bc7b3102d68194e8af8b49d" exitCode=0 Jan 30 21:42:57 crc kubenswrapper[4979]: I0130 21:42:57.788098 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hwb2t" event={"ID":"21b53e08-d25e-41ab-a180-4b852eb77c8c","Type":"ContainerDied","Data":"1fb05cef810c91cb605dd4c3bc4b66f2e11e171e5bc7b3102d68194e8af8b49d"} Jan 30 21:42:57 crc kubenswrapper[4979]: I0130 21:42:57.790429 4979 generic.go:334] "Generic (PLEG): container finished" podID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerID="0e6f69cd4614a1bb62b39b70bbd49625b932e4c6dcb736053a2748eac81dda1e" exitCode=0 Jan 30 21:42:57 crc kubenswrapper[4979]: I0130 21:42:57.790480 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" event={"ID":"ff61cd4b-2b9f-4588-be96-10038ccc4a92","Type":"ContainerDied","Data":"0e6f69cd4614a1bb62b39b70bbd49625b932e4c6dcb736053a2748eac81dda1e"} Jan 30 21:42:58 crc kubenswrapper[4979]: I0130 21:42:58.635013 4979 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4zkpx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 
21:42:58 crc kubenswrapper[4979]: I0130 21:42:58.635114 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 21:42:59 crc kubenswrapper[4979]: I0130 21:42:59.802975 4979 generic.go:334] "Generic (PLEG): container finished" podID="828e6466-447a-47f9-9727-3992db7c27c9" containerID="1a95ca4d3d52fa45ac0c03598e04f51654e2ae85b01f82e3a46a20846a9d630c" exitCode=0 Jan 30 21:42:59 crc kubenswrapper[4979]: I0130 21:42:59.803024 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" event={"ID":"828e6466-447a-47f9-9727-3992db7c27c9","Type":"ContainerDied","Data":"1a95ca4d3d52fa45ac0c03598e04f51654e2ae85b01f82e3a46a20846a9d630c"} Jan 30 21:43:00 crc kubenswrapper[4979]: I0130 21:43:00.732624 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:43:02 crc kubenswrapper[4979]: I0130 21:43:02.040423 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:43:02 crc kubenswrapper[4979]: I0130 21:43:02.041228 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:43:05 crc kubenswrapper[4979]: I0130 21:43:05.445220 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:43:05 crc kubenswrapper[4979]: I0130 21:43:05.445731 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:43:06 crc kubenswrapper[4979]: I0130 21:43:06.275563 4979 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-x8j5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 30 21:43:06 crc kubenswrapper[4979]: I0130 21:43:06.275682 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 30 21:43:07 crc kubenswrapper[4979]: I0130 21:43:07.951351 4979 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d8kf5" Jan 30 21:43:08 crc kubenswrapper[4979]: I0130 21:43:08.635744 4979 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4zkpx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 21:43:08 crc kubenswrapper[4979]: I0130 21:43:08.635830 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 21:43:13 crc kubenswrapper[4979]: I0130 21:43:13.212222 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:43:15 crc kubenswrapper[4979]: I0130 21:43:15.443188 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:43:15 crc kubenswrapper[4979]: I0130 21:43:15.443658 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:43:17 crc kubenswrapper[4979]: I0130 21:43:17.275495 4979 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-x8j5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 21:43:17 crc kubenswrapper[4979]: I0130 21:43:17.275586 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 21:43:18 crc kubenswrapper[4979]: E0130 21:43:18.290397 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 21:43:18 crc kubenswrapper[4979]: E0130 21:43:18.290963 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Jan 30 21:43:18 crc kubenswrapper[4979]: E0130 21:43:18.290397 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 21:43:18 crc kubenswrapper[4979]: E0130 21:43:18.290963 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nls66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wjwlb_openshift-marketplace(cfb214a7-6df6-4fd6-a74c-db4f38b0a086): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 21:43:18 crc kubenswrapper[4979]: E0130 21:43:18.292491 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wjwlb" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.866602 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 21:43:18 crc kubenswrapper[4979]: E0130 21:43:18.866902 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ef219c-4a0f-4fba-8bc4-6fa51bc996f7" containerName="pruner"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.866922 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ef219c-4a0f-4fba-8bc4-6fa51bc996f7" containerName="pruner"
Jan 30 21:43:18 crc kubenswrapper[4979]: E0130 21:43:18.866946 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55091f68-f13e-49c0-9b8a-3285b7eddb4b" containerName="pruner"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.866955 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="55091f68-f13e-49c0-9b8a-3285b7eddb4b" containerName="pruner"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.867101 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ef219c-4a0f-4fba-8bc4-6fa51bc996f7" containerName="pruner"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.867119 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="55091f68-f13e-49c0-9b8a-3285b7eddb4b" containerName="pruner"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.867642 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.871731 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.871905 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.881733 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.996948 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 21:43:18 crc kubenswrapper[4979]: I0130 21:43:18.997078 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.098419 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.098529 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.098655 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.123213 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.195968 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 21:43:19 crc kubenswrapper[4979]: E0130 21:43:19.533637 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wjwlb" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086"
Jan 30 21:43:19 crc kubenswrapper[4979]: E0130 21:43:19.540508 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage3114423416/2\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 21:43:19 crc kubenswrapper[4979]: E0130 21:43:19.540867 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqrjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qmzzl_openshift-marketplace(2b857a3f-c3a5-4851-ba1e-25d9dbc64de5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage3114423416/2\": happened during read: context canceled" logger="UnhandledError"
Jan 30 21:43:19 crc kubenswrapper[4979]: E0130 21:43:19.542650 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage3114423416/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qmzzl" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5"
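The &Container{...} dumps above spell out the extract-content init container that each openshift-marketplace catalog pod runs. Re-expressed with k8s.io/api types, the same shape looks roughly as follows; every field value is read off the logged dump, and anything not shown there is omitted.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
)

// extractContentInitContainer mirrors the logged &Container{...} dump.
func extractContentInitContainer(image string) corev1.Container {
	runAsUser := int64(1000170000)    // RunAsUser:*1000170000 in the dump
	runAsNonRoot := true              // RunAsNonRoot:*true
	allowPrivilegeEscalation := false // AllowPrivilegeEscalation:*false
	return corev1.Container{
		Name:    "extract-content",
		Image:   image, // e.g. registry.redhat.io/redhat/redhat-marketplace-index:v4.18
		Command: []string{"/utilities/copy-content"},
		Args: []string{
			"--catalog.from=/configs",
			"--catalog.to=/extracted-catalog/catalog",
			"--cache.from=/tmp/cache",
			"--cache.to=/extracted-catalog/cache",
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "utilities", MountPath: "/utilities"},
			{Name: "catalog-content", MountPath: "/extracted-catalog"},
		},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		ImagePullPolicy:          corev1.PullAlways,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			RunAsUser:                &runAsUser,
			RunAsNonRoot:             &runAsNonRoot,
			AllowPrivilegeEscalation: &allowPrivilegeEscalation,
		},
	}
}

func main() {
	_ = extractContentInitContainer("registry.redhat.io/redhat/redhat-marketplace-index:v4.18")
}
```

The kube-api-access-* projected mount that also appears in each dump is the automatically injected service-account token volume, which is why it is not declared here.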
Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.636106 4979 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4zkpx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 21:43:19 crc kubenswrapper[4979]: I0130 21:43:19.636212 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 21:43:20 crc kubenswrapper[4979]: E0130 21:43:20.293607 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 30 21:43:20 crc kubenswrapper[4979]: E0130 21:43:20.294239 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqdhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-npfvh_openshift-marketplace(568a44ae-c892-48a7-b4c0-2d83606e7b95): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 21:43:20 crc kubenswrapper[4979]: E0130 21:43:20.295488 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-npfvh" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95"
Jan 30 21:43:20 crc kubenswrapper[4979]: E0130 21:43:20.654547 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1514567063/2\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 30 21:43:20 crc kubenswrapper[4979]: E0130 21:43:20.654741 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmmtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2tvd8_openshift-marketplace(3641ad73-644b-4d71-860b-4d8b7e6a3a6d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1514567063/2\": happened during read: context canceled" logger="UnhandledError"
Jan 30 21:43:20 crc kubenswrapper[4979]: E0130 21:43:20.656229 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage1514567063/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-operators-2tvd8" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d"
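The pattern above — ErrImagePull on a failed pull, then "Back-off pulling image" (ImagePullBackOff) on later syncs — is the kubelet applying a bounded exponential backoff between pull retries. A toy sketch of that shape only; the real logic lives in the kubelet's image manager, and the initial delay, factor, and cap below are assumptions (the commonly cited defaults are roughly 10s doubling to a 5m cap).

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pullWithBackoff retries pull, doubling the wait after each failure up to a cap.
func pullWithBackoff(pull func() error, initial, maxDelay time.Duration) error {
	delay := initial
	for {
		err := pull()
		if err == nil {
			return nil
		}
		fmt.Printf("pull failed (%v); backing off %s\n", err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay // bounded, like kubelet's capped backoff
		}
	}
}

func main() {
	attempts := 0
	_ = pullWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("rpc error: code = Canceled") // shaped like the errors above
		}
		return nil
	}, 10*time.Millisecond, 80*time.Millisecond) // demo durations, not kubelet's
}
```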
Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.669584 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.670920 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.676271 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.864371 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-var-lock\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.864457 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kube-api-access\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.864756 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.965778 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.965884 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-var-lock\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.965904 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kube-api-access\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.965923 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.966113 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-var-lock\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:43:23 crc kubenswrapper[4979]: I0130 21:43:23.986000 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kube-api-access\") pod \"installer-9-crc\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:43:24 crc kubenswrapper[4979]: I0130 21:43:24.006789 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 21:43:24 crc kubenswrapper[4979]: E0130 21:43:24.611803 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 30 21:43:24 crc kubenswrapper[4979]: E0130 21:43:24.613180 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rtgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dk444_openshift-marketplace(6ceea51c-f0b8-4de3-be53-f1d857b3a1b8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 21:43:24 crc kubenswrapper[4979]: E0130 21:43:24.614393 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dk444" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8"
Jan 30 21:43:25 crc kubenswrapper[4979]: I0130 21:43:25.444119 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 30 21:43:25 crc kubenswrapper[4979]: I0130 21:43:25.444245 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 30 21:43:27 crc kubenswrapper[4979]: I0130 21:43:27.274653 4979 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-x8j5s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 21:43:27 crc kubenswrapper[4979]: I0130 21:43:27.274813 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 21:43:27 crc kubenswrapper[4979]: E0130 21:43:27.930359 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2tvd8" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d"
Jan 30 21:43:27 crc kubenswrapper[4979]: E0130 21:43:27.930507 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qmzzl" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5"
Jan 30 21:43:27 crc kubenswrapper[4979]: E0130 21:43:27.930534 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-npfvh" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95"
Jan 30 21:43:27 crc kubenswrapper[4979]: E0130 21:43:27.930688 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dk444" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8"
Jan 30 21:43:27 crc kubenswrapper[4979]: I0130 21:43:27.997544 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx" event={"ID":"ff61cd4b-2b9f-4588-be96-10038ccc4a92","Type":"ContainerDied","Data":"2fdea5ec5c945a9b137321bd0204027de83c52d16c6cd7e9cca2d07e312e0fe5"}
Jan 30 21:43:27 crc kubenswrapper[4979]: I0130 21:43:27.997629 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fdea5ec5c945a9b137321bd0204027de83c52d16c6cd7e9cca2d07e312e0fe5"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.000105 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s" event={"ID":"828e6466-447a-47f9-9727-3992db7c27c9","Type":"ContainerDied","Data":"deac1bbcbdad5b5fef8f1539d5a37b05719e08732d51faaf7fef7703be74e096"}
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.000139 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deac1bbcbdad5b5fef8f1539d5a37b05719e08732d51faaf7fef7703be74e096"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.010758 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.015757 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.041755 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"]
Jan 30 21:43:28 crc kubenswrapper[4979]: E0130 21:43:28.042074 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.042091 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager"
Jan 30 21:43:28 crc kubenswrapper[4979]: E0130 21:43:28.042108 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.042118 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.042339 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="828e6466-447a-47f9-9727-3992db7c27c9" containerName="route-controller-manager"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.042359 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" containerName="controller-manager"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.042835 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.059046 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"]
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.134997 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff61cd4b-2b9f-4588-be96-10038ccc4a92-serving-cert\") pod \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") "
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135104 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-client-ca\") pod \"828e6466-447a-47f9-9727-3992db7c27c9\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") "
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135167 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") pod \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") "
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135220 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828e6466-447a-47f9-9727-3992db7c27c9-serving-cert\") pod \"828e6466-447a-47f9-9727-3992db7c27c9\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") "
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135267 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j27sl\" (UniqueName: \"kubernetes.io/projected/828e6466-447a-47f9-9727-3992db7c27c9-kube-api-access-j27sl\") pod \"828e6466-447a-47f9-9727-3992db7c27c9\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") "
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135314 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-proxy-ca-bundles\") pod \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") "
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135358 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frqrz\" (UniqueName: \"kubernetes.io/projected/ff61cd4b-2b9f-4588-be96-10038ccc4a92-kube-api-access-frqrz\") pod \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") "
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.135434 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-config\") pod \"828e6466-447a-47f9-9727-3992db7c27c9\" (UID: \"828e6466-447a-47f9-9727-3992db7c27c9\") "
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136199 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config\") pod \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\" (UID: \"ff61cd4b-2b9f-4588-be96-10038ccc4a92\") "
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136241 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ff61cd4b-2b9f-4588-be96-10038ccc4a92" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136332 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-client-ca" (OuterVolumeSpecName: "client-ca") pod "828e6466-447a-47f9-9727-3992db7c27c9" (UID: "828e6466-447a-47f9-9727-3992db7c27c9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136422 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-config" (OuterVolumeSpecName: "config") pod "828e6466-447a-47f9-9727-3992db7c27c9" (UID: "828e6466-447a-47f9-9727-3992db7c27c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136765 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-config\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136831 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-client-ca\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.136903 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config" (OuterVolumeSpecName: "config") pod "ff61cd4b-2b9f-4588-be96-10038ccc4a92" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137172 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwtgt\" (UniqueName: \"kubernetes.io/projected/c138f389-e49e-4c26-b2ee-af169b1c8343-kube-api-access-xwtgt\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137422 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c138f389-e49e-4c26-b2ee-af169b1c8343-serving-cert\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137458 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca" (OuterVolumeSpecName: "client-ca") pod "ff61cd4b-2b9f-4588-be96-10038ccc4a92" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137572 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137607 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137627 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.137648 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/828e6466-447a-47f9-9727-3992db7c27c9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.141462 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828e6466-447a-47f9-9727-3992db7c27c9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "828e6466-447a-47f9-9727-3992db7c27c9" (UID: "828e6466-447a-47f9-9727-3992db7c27c9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.141597 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828e6466-447a-47f9-9727-3992db7c27c9-kube-api-access-j27sl" (OuterVolumeSpecName: "kube-api-access-j27sl") pod "828e6466-447a-47f9-9727-3992db7c27c9" (UID: "828e6466-447a-47f9-9727-3992db7c27c9"). InnerVolumeSpecName "kube-api-access-j27sl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.141982 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff61cd4b-2b9f-4588-be96-10038ccc4a92-kube-api-access-frqrz" (OuterVolumeSpecName: "kube-api-access-frqrz") pod "ff61cd4b-2b9f-4588-be96-10038ccc4a92" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92"). InnerVolumeSpecName "kube-api-access-frqrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.142880 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff61cd4b-2b9f-4588-be96-10038ccc4a92-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ff61cd4b-2b9f-4588-be96-10038ccc4a92" (UID: "ff61cd4b-2b9f-4588-be96-10038ccc4a92"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.238818 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwtgt\" (UniqueName: \"kubernetes.io/projected/c138f389-e49e-4c26-b2ee-af169b1c8343-kube-api-access-xwtgt\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239003 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c138f389-e49e-4c26-b2ee-af169b1c8343-serving-cert\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239086 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-config\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239182 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-client-ca\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239312 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff61cd4b-2b9f-4588-be96-10038ccc4a92-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239334 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828e6466-447a-47f9-9727-3992db7c27c9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239353 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j27sl\" (UniqueName: \"kubernetes.io/projected/828e6466-447a-47f9-9727-3992db7c27c9-kube-api-access-j27sl\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239372 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frqrz\" (UniqueName: \"kubernetes.io/projected/ff61cd4b-2b9f-4588-be96-10038ccc4a92-kube-api-access-frqrz\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.239389 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff61cd4b-2b9f-4588-be96-10038ccc4a92-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.241645 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-client-ca\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.242915 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-config\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.246630 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c138f389-e49e-4c26-b2ee-af169b1c8343-serving-cert\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.274944 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwtgt\" (UniqueName: \"kubernetes.io/projected/c138f389-e49e-4c26-b2ee-af169b1c8343-kube-api-access-xwtgt\") pod \"route-controller-manager-756df7bd56-4mqfb\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:28 crc kubenswrapper[4979]: I0130 21:43:28.381077 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"
Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.006510 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4zkpx"
Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.010015 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"
Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.051674 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"]
Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.056656 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-x8j5s"]
Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.066755 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4zkpx"]
Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.077877 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="828e6466-447a-47f9-9727-3992db7c27c9" path="/var/lib/kubelet/pods/828e6466-447a-47f9-9727-3992db7c27c9/volumes"
Jan 30 21:43:29 crc kubenswrapper[4979]: I0130 21:43:29.078680 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4zkpx"]
Jan 30 21:43:30 crc kubenswrapper[4979]: E0130 21:43:30.119433 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 30 21:43:30 crc kubenswrapper[4979]: E0130 21:43:30.119924 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hzvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-454jj_openshift-marketplace(82df7d39-6821-4916-b8c9-534688ca3d5e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 21:43:30 crc kubenswrapper[4979]: E0130 21:43:30.121305 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-454jj" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e"
Jan 30 21:43:30 crc kubenswrapper[4979]: E0130 21:43:30.256741 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 30 21:43:30 crc kubenswrapper[4979]: E0130 21:43:30.256925 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-snrx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-krrkl_openshift-marketplace(9ced41eb-6843-4dfe-81c7-267a56f75a73): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 21:43:30 crc kubenswrapper[4979]: E0130 21:43:30.258414 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-krrkl" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.461780 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68c44f896-2p552"]
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.462698 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.465479 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.465719 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.466818 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68c44f896-2p552"]
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.466987 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.467265 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.468181 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.468406 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.473295 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.573798 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-serving-cert\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.573853 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-client-ca\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.573906 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-config\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.573931 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-proxy-ca-bundles\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.573954 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgwbb\" (UniqueName: \"kubernetes.io/projected/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-kube-api-access-rgwbb\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.675360 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-serving-cert\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.675432 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-client-ca\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.675517 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-config\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.675586 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgwbb\" (UniqueName: \"kubernetes.io/projected/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-kube-api-access-rgwbb\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.675626 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-proxy-ca-bundles\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.676498 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-client-ca\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.676883 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-config\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.691186 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-proxy-ca-bundles\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552"
30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.698483 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-serving-cert\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.701705 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgwbb\" (UniqueName: \"kubernetes.io/projected/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-kube-api-access-rgwbb\") pod \"controller-manager-68c44f896-2p552\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:30 crc kubenswrapper[4979]: I0130 21:43:30.784553 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:31 crc kubenswrapper[4979]: I0130 21:43:31.077675 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff61cd4b-2b9f-4588-be96-10038ccc4a92" path="/var/lib/kubelet/pods/ff61cd4b-2b9f-4588-be96-10038ccc4a92/volumes" Jan 30 21:43:32 crc kubenswrapper[4979]: I0130 21:43:32.039543 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:43:32 crc kubenswrapper[4979]: I0130 21:43:32.039618 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:43:32 crc kubenswrapper[4979]: I0130 21:43:32.039672 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:43:32 crc kubenswrapper[4979]: I0130 21:43:32.040350 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:43:32 crc kubenswrapper[4979]: I0130 21:43:32.040427 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d" gracePeriod=600 Jan 30 21:43:33 crc kubenswrapper[4979]: E0130 21:43:33.907270 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-454jj" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" Jan 30 21:43:33 crc kubenswrapper[4979]: E0130 21:43:33.907267 4979 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-krrkl" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" Jan 30 21:43:33 crc kubenswrapper[4979]: E0130 21:43:33.951540 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 21:43:33 crc kubenswrapper[4979]: E0130 21:43:33.951816 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gc7xc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sg6j7_openshift-marketplace(444df6ed-3c43-4310-adc6-69ab0a9ea702): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:43:33 crc kubenswrapper[4979]: E0130 21:43:33.953715 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sg6j7" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.049298 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d" exitCode=0 Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.050885 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d"} Jan 30 21:43:34 crc kubenswrapper[4979]: 
E0130 21:43:34.059405 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sg6j7" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.409847 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pk47q"] Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.457825 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.461063 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.504078 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"] Jan 30 21:43:34 crc kubenswrapper[4979]: I0130 21:43:34.515058 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68c44f896-2p552"] Jan 30 21:43:34 crc kubenswrapper[4979]: W0130 21:43:34.605607 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc138f389_e49e_4c26_b2ee_af169b1c8343.slice/crio-18c23f6f985da2e38cf0d706d168368cd8421368b40bada9a0e8edfd231d5894 WatchSource:0}: Error finding container 18c23f6f985da2e38cf0d706d168368cd8421368b40bada9a0e8edfd231d5894: Status 404 returned error can't find the container with id 18c23f6f985da2e38cf0d706d168368cd8421368b40bada9a0e8edfd231d5894 Jan 30 21:43:34 crc kubenswrapper[4979]: W0130 21:43:34.607530 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podceeab3d6_4012_4d7b_ae04_fc3829fafd53.slice/crio-086af074250ba041c6da43c2565f862b91305b9e93839103bada96442fef7a29 WatchSource:0}: Error finding container 086af074250ba041c6da43c2565f862b91305b9e93839103bada96442fef7a29: Status 404 returned error can't find the container with id 086af074250ba041c6da43c2565f862b91305b9e93839103bada96442fef7a29 Jan 30 21:43:34 crc kubenswrapper[4979]: W0130 21:43:34.614006 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6cce4b7_6306_43b1_8e2d_e4a29ec3bd6b.slice/crio-f4a830a09061a5933a998451c777de577ea08083a40015478a63156286038c77 WatchSource:0}: Error finding container f4a830a09061a5933a998451c777de577ea08083a40015478a63156286038c77: Status 404 returned error can't find the container with id f4a830a09061a5933a998451c777de577ea08083a40015478a63156286038c77 Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.058980 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" event={"ID":"c138f389-e49e-4c26-b2ee-af169b1c8343","Type":"ContainerStarted","Data":"18c23f6f985da2e38cf0d706d168368cd8421368b40bada9a0e8edfd231d5894"} Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.060878 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pk47q" event={"ID":"d0632938-c88a-4c22-b0e7-8f7473532f07","Type":"ContainerStarted","Data":"cea6153ed06c9de30a36b68e36f6f2955a3c78913f12aaf4be6fdc08005dafe9"} Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 
21:43:35.062507 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" event={"ID":"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b","Type":"ContainerStarted","Data":"f4a830a09061a5933a998451c777de577ea08083a40015478a63156286038c77"} Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.063922 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf5abe88-43e9-47ae-87fc-9163bd1aec5e","Type":"ContainerStarted","Data":"730bda6f6ba79a0d724889d0d885e5fa44125a1c153bf8e55571376fa265a6a7"} Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.065182 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ceeab3d6-4012-4d7b-ae04-fc3829fafd53","Type":"ContainerStarted","Data":"086af074250ba041c6da43c2565f862b91305b9e93839103bada96442fef7a29"} Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.069659 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hwb2t" event={"ID":"21b53e08-d25e-41ab-a180-4b852eb77c8c","Type":"ContainerStarted","Data":"6ff5511a83d2a904767161a15f8ce1841610d209646439ad72a484c4a5b712cd"} Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.072212 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.072300 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.100492 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"1d5308deb4fb750f100d625c67d41f0e4ff6f56c501723aebe861edc5dea525b"} Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.100586 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hwb2t" Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.443779 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.444341 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 21:43:35.443873 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:43:35 crc kubenswrapper[4979]: I0130 
21:43:35.444464 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.100573 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" event={"ID":"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b","Type":"ContainerStarted","Data":"c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3"} Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.101392 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.106679 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ceeab3d6-4012-4d7b-ae04-fc3829fafd53","Type":"ContainerStarted","Data":"781bc5e2c22325e2f70b4c7a950fbfb8ab9d6654a493cdce31f3a0b0d7a6013c"} Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.107204 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.109121 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" event={"ID":"c138f389-e49e-4c26-b2ee-af169b1c8343","Type":"ContainerStarted","Data":"31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb"} Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.110017 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.112783 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pk47q" event={"ID":"d0632938-c88a-4c22-b0e7-8f7473532f07","Type":"ContainerStarted","Data":"73784d244c4b4f76027150ba2267d5951c452f9b420bcd2ddc54ad5d64244ccb"} Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.112812 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pk47q" event={"ID":"d0632938-c88a-4c22-b0e7-8f7473532f07","Type":"ContainerStarted","Data":"5d52b8c7e5fa9243c158fc3cb8200cb6a4f35c94b30d14448faa99dfa0683260"} Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.115783 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf5abe88-43e9-47ae-87fc-9163bd1aec5e","Type":"ContainerStarted","Data":"8fde1572bb636a4d23b5e24e14b788b050124ee0ad5e961a05afc8ed632de43e"} Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.116595 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.116635 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.117467 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.127820 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" podStartSLOduration=20.127792969 podStartE2EDuration="20.127792969s" podCreationTimestamp="2026-01-30 21:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:36.124456176 +0000 UTC m=+212.085703209" watchObservedRunningTime="2026-01-30 21:43:36.127792969 +0000 UTC m=+212.089040012" Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.157516 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" podStartSLOduration=20.157499098 podStartE2EDuration="20.157499098s" podCreationTimestamp="2026-01-30 21:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:36.156872241 +0000 UTC m=+212.118119274" watchObservedRunningTime="2026-01-30 21:43:36.157499098 +0000 UTC m=+212.118746121" Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.194429 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-pk47q" podStartSLOduration=191.194387607 podStartE2EDuration="3m11.194387607s" podCreationTimestamp="2026-01-30 21:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:36.191618699 +0000 UTC m=+212.152865742" watchObservedRunningTime="2026-01-30 21:43:36.194387607 +0000 UTC m=+212.155634640" Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.216968 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=13.216943196 podStartE2EDuration="13.216943196s" podCreationTimestamp="2026-01-30 21:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:36.21674522 +0000 UTC m=+212.177992273" watchObservedRunningTime="2026-01-30 21:43:36.216943196 +0000 UTC m=+212.178190229" Jan 30 21:43:36 crc kubenswrapper[4979]: I0130 21:43:36.235868 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=18.235846284 podStartE2EDuration="18.235846284s" podCreationTimestamp="2026-01-30 21:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:36.235245966 +0000 UTC m=+212.196492999" watchObservedRunningTime="2026-01-30 21:43:36.235846284 +0000 UTC m=+212.197093317" Jan 30 21:43:37 crc kubenswrapper[4979]: I0130 21:43:37.132069 4979 generic.go:334] "Generic (PLEG): container finished" podID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerID="6777c7a712aaeb3b92c712ea13c14e93a0636f80d815df1f08df98f2e3cc68fe" exitCode=0 Jan 30 21:43:37 crc kubenswrapper[4979]: I0130 
21:43:37.132170 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjwlb" event={"ID":"cfb214a7-6df6-4fd6-a74c-db4f38b0a086","Type":"ContainerDied","Data":"6777c7a712aaeb3b92c712ea13c14e93a0636f80d815df1f08df98f2e3cc68fe"} Jan 30 21:43:37 crc kubenswrapper[4979]: I0130 21:43:37.134630 4979 generic.go:334] "Generic (PLEG): container finished" podID="ceeab3d6-4012-4d7b-ae04-fc3829fafd53" containerID="781bc5e2c22325e2f70b4c7a950fbfb8ab9d6654a493cdce31f3a0b0d7a6013c" exitCode=0 Jan 30 21:43:37 crc kubenswrapper[4979]: I0130 21:43:37.134758 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ceeab3d6-4012-4d7b-ae04-fc3829fafd53","Type":"ContainerDied","Data":"781bc5e2c22325e2f70b4c7a950fbfb8ab9d6654a493cdce31f3a0b0d7a6013c"} Jan 30 21:43:37 crc kubenswrapper[4979]: I0130 21:43:37.138480 4979 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwb2t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 21:43:37 crc kubenswrapper[4979]: I0130 21:43:37.138755 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwb2t" podUID="21b53e08-d25e-41ab-a180-4b852eb77c8c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.144591 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjwlb" event={"ID":"cfb214a7-6df6-4fd6-a74c-db4f38b0a086","Type":"ContainerStarted","Data":"66b10ec48352a0a5598a324fadbde93f516e9ce5018944e53e2f4c6a14a933a7"} Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.164679 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wjwlb" podStartSLOduration=3.300609671 podStartE2EDuration="1m4.164646854s" podCreationTimestamp="2026-01-30 21:42:34 +0000 UTC" firstStartedPulling="2026-01-30 21:42:36.707860465 +0000 UTC m=+152.669107498" lastFinishedPulling="2026-01-30 21:43:37.571897638 +0000 UTC m=+213.533144681" observedRunningTime="2026-01-30 21:43:38.161799135 +0000 UTC m=+214.123046168" watchObservedRunningTime="2026-01-30 21:43:38.164646854 +0000 UTC m=+214.125893917" Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.407194 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.493679 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kube-api-access\") pod \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.493774 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kubelet-dir\") pod \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\" (UID: \"ceeab3d6-4012-4d7b-ae04-fc3829fafd53\") " Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.493921 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ceeab3d6-4012-4d7b-ae04-fc3829fafd53" (UID: "ceeab3d6-4012-4d7b-ae04-fc3829fafd53"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.494099 4979 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.505797 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ceeab3d6-4012-4d7b-ae04-fc3829fafd53" (UID: "ceeab3d6-4012-4d7b-ae04-fc3829fafd53"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:38 crc kubenswrapper[4979]: I0130 21:43:38.595454 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceeab3d6-4012-4d7b-ae04-fc3829fafd53-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:39 crc kubenswrapper[4979]: I0130 21:43:39.154124 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ceeab3d6-4012-4d7b-ae04-fc3829fafd53","Type":"ContainerDied","Data":"086af074250ba041c6da43c2565f862b91305b9e93839103bada96442fef7a29"} Jan 30 21:43:39 crc kubenswrapper[4979]: I0130 21:43:39.154179 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="086af074250ba041c6da43c2565f862b91305b9e93839103bada96442fef7a29" Jan 30 21:43:39 crc kubenswrapper[4979]: I0130 21:43:39.154994 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:43:44 crc kubenswrapper[4979]: I0130 21:43:44.871587 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:43:44 crc kubenswrapper[4979]: I0130 21:43:44.872166 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:43:45 crc kubenswrapper[4979]: I0130 21:43:45.443153 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:43:45 crc kubenswrapper[4979]: I0130 21:43:45.457889 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hwb2t" Jan 30 21:43:45 crc kubenswrapper[4979]: I0130 21:43:45.513388 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.224391 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerStarted","Data":"c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126"} Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.229062 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk444" event={"ID":"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8","Type":"ContainerDied","Data":"4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6"} Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.229087 4979 generic.go:334] "Generic (PLEG): container finished" podID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerID="4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6" exitCode=0 Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.232419 4979 generic.go:334] "Generic (PLEG): container finished" podID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerID="20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16" exitCode=0 Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.232484 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerDied","Data":"20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16"} Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.235708 4979 generic.go:334] "Generic (PLEG): container finished" podID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerID="d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874" exitCode=0 Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.235741 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npfvh" event={"ID":"568a44ae-c892-48a7-b4c0-2d83606e7b95","Type":"ContainerDied","Data":"d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874"} Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.238333 4979 generic.go:334] "Generic (PLEG): container finished" podID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerID="9c8374b15b5619f4f1304cf75cea07e98769e40d36978831645aa6ad442f9748" exitCode=0 Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.238418 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krrkl" 
event={"ID":"9ced41eb-6843-4dfe-81c7-267a56f75a73","Type":"ContainerDied","Data":"9c8374b15b5619f4f1304cf75cea07e98769e40d36978831645aa6ad442f9748"} Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.241823 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerStarted","Data":"bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516"} Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.244176 4979 generic.go:334] "Generic (PLEG): container finished" podID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerID="8ce38f5c2d102434af1616c327c364faa35dac4f176a6f600fbf112072871235" exitCode=0 Jan 30 21:43:50 crc kubenswrapper[4979]: I0130 21:43:50.244217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-454jj" event={"ID":"82df7d39-6821-4916-b8c9-534688ca3d5e","Type":"ContainerDied","Data":"8ce38f5c2d102434af1616c327c364faa35dac4f176a6f600fbf112072871235"} Jan 30 21:43:51 crc kubenswrapper[4979]: I0130 21:43:51.255382 4979 generic.go:334] "Generic (PLEG): container finished" podID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerID="c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126" exitCode=0 Jan 30 21:43:51 crc kubenswrapper[4979]: I0130 21:43:51.255468 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerDied","Data":"c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126"} Jan 30 21:43:51 crc kubenswrapper[4979]: I0130 21:43:51.258690 4979 generic.go:334] "Generic (PLEG): container finished" podID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerID="bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516" exitCode=0 Jan 30 21:43:51 crc kubenswrapper[4979]: I0130 21:43:51.258736 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerDied","Data":"bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516"} Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.266229 4979 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.267405 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636" gracePeriod=15 Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.267469 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978" gracePeriod=15 Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.267565 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229" gracePeriod=15 Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 
21:44:13.267654 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c" gracePeriod=15 Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.267818 4979 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.267580 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4" gracePeriod=15 Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268268 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268289 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268301 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268309 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268327 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268336 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268346 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268354 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268367 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceeab3d6-4012-4d7b-ae04-fc3829fafd53" containerName="pruner" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268375 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceeab3d6-4012-4d7b-ae04-fc3829fafd53" containerName="pruner" Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268389 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268397 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268414 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 
21:44:13.268421 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 21:44:13 crc kubenswrapper[4979]: E0130 21:44:13.268430 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268438 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268567 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268583 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268597 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceeab3d6-4012-4d7b-ae04-fc3829fafd53" containerName="pruner" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268609 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268622 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268637 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.268650 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.277123 4979 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.278703 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.284435 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425115 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425269 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425334 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425402 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425464 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425513 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425542 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.425572 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.527885 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528069 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528127 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528165 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528217 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528237 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528257 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528272 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528304 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528352 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528389 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528442 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528508 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528499 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528530 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.528596 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.878758 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.880632 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:44:13 crc kubenswrapper[4979]: I0130 21:44:13.881560 4979 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c" exitCode=2 Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.894239 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.897235 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.898737 4979 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978" exitCode=0 Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.898811 4979 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4" exitCode=0 Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.898835 4979 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229" exitCode=0 Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.898879 4979 scope.go:117] "RemoveContainer" containerID="aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd" Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.903531 4979 generic.go:334] "Generic (PLEG): container finished" podID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" containerID="8fde1572bb636a4d23b5e24e14b788b050124ee0ad5e961a05afc8ed632de43e" exitCode=0 Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.903637 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf5abe88-43e9-47ae-87fc-9163bd1aec5e","Type":"ContainerDied","Data":"8fde1572bb636a4d23b5e24e14b788b050124ee0ad5e961a05afc8ed632de43e"} Jan 30 21:44:14 crc kubenswrapper[4979]: I0130 21:44:14.905203 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc kubenswrapper[4979]: I0130 21:44:15.073071 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc kubenswrapper[4979]: E0130 21:44:15.621515 4979 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc kubenswrapper[4979]: E0130 21:44:15.622656 4979 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc kubenswrapper[4979]: E0130 21:44:15.623110 4979 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc 
kubenswrapper[4979]: E0130 21:44:15.623566 4979 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc kubenswrapper[4979]: E0130 21:44:15.624272 4979 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:15 crc kubenswrapper[4979]: I0130 21:44:15.624332 4979 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 21:44:15 crc kubenswrapper[4979]: E0130 21:44:15.624709 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="200ms" Jan 30 21:44:15 crc kubenswrapper[4979]: E0130 21:44:15.825793 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="400ms" Jan 30 21:44:15 crc kubenswrapper[4979]: I0130 21:44:15.923068 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:44:15 crc kubenswrapper[4979]: I0130 21:44:15.924009 4979 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636" exitCode=0 Jan 30 21:44:16 crc kubenswrapper[4979]: E0130 21:44:16.095452 4979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-2tvd8.188fa052ce9d6823 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-2tvd8,UID:3641ad73-644b-4d71-860b-4d8b7e6a3a6d,APIVersion:v1,ResourceVersion:28136,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 24.835s (24.835s including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:44:16.094079011 +0000 UTC m=+252.055326034,LastTimestamp:2026-01-30 21:44:16.094079011 +0000 UTC m=+252.055326034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:44:16 crc kubenswrapper[4979]: E0130 21:44:16.227290 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="800ms" Jan 30 21:44:17 crc kubenswrapper[4979]: E0130 21:44:17.028581 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="1.6s" Jan 30 21:44:18 crc kubenswrapper[4979]: E0130 21:44:18.315730 4979 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.316708 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:18 crc kubenswrapper[4979]: E0130 21:44:18.629890 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="3.2s" Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.670763 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.671454 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.822113 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kubelet-dir\") pod \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.822874 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kube-api-access\") pod \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.822242 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf5abe88-43e9-47ae-87fc-9163bd1aec5e" (UID: "bf5abe88-43e9-47ae-87fc-9163bd1aec5e"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.823066 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-var-lock\") pod \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\" (UID: \"bf5abe88-43e9-47ae-87fc-9163bd1aec5e\") " Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.823211 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-var-lock" (OuterVolumeSpecName: "var-lock") pod "bf5abe88-43e9-47ae-87fc-9163bd1aec5e" (UID: "bf5abe88-43e9-47ae-87fc-9163bd1aec5e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.823875 4979 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.823964 4979 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.832295 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf5abe88-43e9-47ae-87fc-9163bd1aec5e" (UID: "bf5abe88-43e9-47ae-87fc-9163bd1aec5e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.925506 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf5abe88-43e9-47ae-87fc-9163bd1aec5e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.946316 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf5abe88-43e9-47ae-87fc-9163bd1aec5e","Type":"ContainerDied","Data":"730bda6f6ba79a0d724889d0d885e5fa44125a1c153bf8e55571376fa265a6a7"} Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.946374 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.946396 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="730bda6f6ba79a0d724889d0d885e5fa44125a1c153bf8e55571376fa265a6a7" Jan 30 21:44:18 crc kubenswrapper[4979]: I0130 21:44:18.966110 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:19 crc kubenswrapper[4979]: E0130 21:44:19.643678 4979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-2tvd8.188fa052ce9d6823 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-2tvd8,UID:3641ad73-644b-4d71-860b-4d8b7e6a3a6d,APIVersion:v1,ResourceVersion:28136,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 24.835s (24.835s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:44:16.094079011 +0000 UTC m=+252.055326034,LastTimestamp:2026-01-30 21:44:16.094079011 +0000 UTC m=+252.055326034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.257576 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.261826 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.262682 4979 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.263153 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.266121 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.266165 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.266208 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.266667 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.266698 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.266713 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.368051 4979 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.368088 4979 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.368101 4979 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:21 crc kubenswrapper[4979]: E0130 21:44:21.831327 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="6.4s" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.966592 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.967424 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.983502 4979 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:21 crc kubenswrapper[4979]: I0130 21:44:21.983676 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:23 crc kubenswrapper[4979]: I0130 21:44:23.084433 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 21:44:24 crc kubenswrapper[4979]: I0130 21:44:24.085857 4979 scope.go:117] "RemoveContainer" containerID="b3e311768adb3f1c6c70519c367bb1aea239b23be94b00848000c7094af21978" Jan 30 21:44:24 crc kubenswrapper[4979]: E0130 21:44:24.159415 4979 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" volumeName="registry-storage" Jan 30 21:44:24 crc kubenswrapper[4979]: I0130 21:44:24.989704 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 
30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.073005 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.571268 4979 scope.go:117] "RemoveContainer" containerID="aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd" Jan 30 21:44:25 crc kubenswrapper[4979]: E0130 21:44:25.572950 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\": container with ID starting with aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd not found: ID does not exist" containerID="aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.572989 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd"} err="failed to get container status \"aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\": rpc error: code = NotFound desc = could not find container \"aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd\": container with ID starting with aa92fe6c9ce2bcea20fe11e2a369261051baaee308fb7c47bcc5a20c20791fbd not found: ID does not exist" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.573012 4979 scope.go:117] "RemoveContainer" containerID="af79f7b22ada7c6c2355c0907966f322361e1cf46a823b3e04d1df6de9dc6ec4" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.722112 4979 scope.go:117] "RemoveContainer" containerID="5e0ac879978301a1d623f727a1bfc3039934c760d43d499762a5ab7035610229" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.879081 4979 scope.go:117] "RemoveContainer" containerID="19ef0167a70ce055ce8ee11beb9491a54f70587220870e0931ea2962adf05a3c" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.906816 4979 scope.go:117] "RemoveContainer" containerID="3fd94a767da4898f53358333f3844ae997492075a88df0115b5fb0239d7a6636" Jan 30 21:44:25 crc kubenswrapper[4979]: I0130 21:44:25.943218 4979 scope.go:117] "RemoveContainer" containerID="cd69f86531997937f8fa2b06bcfe927e6be26c6d2dab60a838897eacca23f486" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.004073 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5c7960b8aed2f0a8dcc77f54da71a656f159fd58147502658fe9b679616525d5"} Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.013280 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.013486 4979 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42" exitCode=1 Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.013738 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42"} Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.014637 4979 scope.go:117] "RemoveContainer" containerID="019d8901989f4de97b48ae8bc9ee2995564b3de2a5d4720159950c6c83804d42" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.014979 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.015505 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.071580 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.073499 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.073905 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.136535 4979 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.136613 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:26 crc kubenswrapper[4979]: E0130 21:44:26.137647 4979 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:26 crc kubenswrapper[4979]: I0130 21:44:26.138296 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:27 crc kubenswrapper[4979]: I0130 21:44:27.033531 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d22c14f49426106c827dc4b7a7b9fead3e323787335d91190c8e8d4de65efbdb"} Jan 30 21:44:27 crc kubenswrapper[4979]: E0130 21:44:27.140954 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:44:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:9bde862635f230b66b73aad05940f6cf2c0555a47fe1db330a20724acca8d497\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:db103f9b4d410efdd30da231ffebe8f093377e6c1e4064ddc68046925eb4627f\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1680805611},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:63fbea3b7080a0b403eaf16b3fed3ceda4cbba1fb0d71797d201d97e0745475c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:eecad2fc166355255907130f5b4a16ed876f792fe4420ae700dbc3741c3a382e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202122991},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:84bdfaa1280b6132c66ed59de2078e0bd7672cde009357354bf028b9a1673a95\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d9b8bab836aa892d91fb35d5c17765fc6fa4b62c78de50c2a7d885c33cc5415d\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1187449074},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/c
ertified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462
\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\
\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:27 crc kubenswrapper[4979]: E0130 21:44:27.141266 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:27 crc kubenswrapper[4979]: E0130 21:44:27.141729 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:27 crc kubenswrapper[4979]: E0130 21:44:27.142368 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:27 crc kubenswrapper[4979]: E0130 21:44:27.142640 4979 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:27 crc kubenswrapper[4979]: E0130 21:44:27.142666 4979 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:44:28 crc kubenswrapper[4979]: I0130 21:44:28.045571 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk444" event={"ID":"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8","Type":"ContainerStarted","Data":"951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631"} Jan 30 21:44:28 crc kubenswrapper[4979]: I0130 21:44:28.059625 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:44:28 crc kubenswrapper[4979]: E0130 21:44:28.232076 4979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.143:6443: connect: connection refused" interval="7s" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.052936 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9b719cf0f25dab63a9a3bef1d08691b4a3f96749c35e20461904c01ac35822c1"} Jan 30 21:44:29 crc kubenswrapper[4979]: E0130 21:44:29.054001 4979 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.054109 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.054607 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.055958 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npfvh" event={"ID":"568a44ae-c892-48a7-b4c0-2d83606e7b95","Type":"ContainerStarted","Data":"8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.056730 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.056910 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.057296 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.059115 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krrkl" event={"ID":"9ced41eb-6843-4dfe-81c7-267a56f75a73","Type":"ContainerStarted","Data":"165fe5bf1fc47247f3d6114846a10d0f59102aaf37fc99f103ab83026418760f"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.060240 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.060811 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.061262 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.061498 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.061725 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerStarted","Data":"aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.062524 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.062878 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.063242 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.063486 4979 status_manager.go:851] "Failed to get status for pod" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" pod="openshift-marketplace/redhat-operators-sg6j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sg6j7\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.063699 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.065353 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.066449 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f134a187b1223352e6962f82641cf0aa50b285311821a652d66179e0adabda49"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.066567 4979 status_manager.go:851] 
"Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.066890 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.067240 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.067480 4979 status_manager.go:851] "Failed to get status for pod" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" pod="openshift-marketplace/redhat-operators-sg6j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sg6j7\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.067813 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.081552 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerStarted","Data":"17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.081594 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-454jj" event={"ID":"82df7d39-6821-4916-b8c9-534688ca3d5e","Type":"ContainerStarted","Data":"74b0411650916af7082a09037409fe4233d0a54527d4cc2f176ba5e845dc24a2"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.083097 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.083356 4979 status_manager.go:851] "Failed to get status for pod" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" pod="openshift-marketplace/community-operators-454jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-454jj\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.083783 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.083960 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.084188 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.084189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerStarted","Data":"e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.084358 4979 status_manager.go:851] "Failed to get status for pod" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" pod="openshift-marketplace/redhat-operators-sg6j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sg6j7\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.084555 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.084704 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.084845 4979 status_manager.go:851] "Failed to get status for pod" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" pod="openshift-marketplace/redhat-operators-sg6j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sg6j7\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.085140 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.087177 4979 status_manager.go:851] "Failed to get status for pod" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" pod="openshift-marketplace/community-operators-454jj" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-454jj\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.087518 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.092181 4979 status_manager.go:851] "Failed to get status for pod" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" pod="openshift-marketplace/redhat-marketplace-qmzzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmzzl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.092627 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.092834 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093003 4979 status_manager.go:851] "Failed to get status for pod" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" pod="openshift-marketplace/redhat-operators-sg6j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sg6j7\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093425 4979 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fff1ba77657b0caf825d8174df4ead60c8cb6239ddf73cc986d72ad160a24312" exitCode=0 Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093424 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093753 4979 status_manager.go:851] "Failed to get status for pod" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" pod="openshift-marketplace/community-operators-454jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-454jj\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093955 4979 status_manager.go:851] "Failed to get status for pod" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" pod="openshift-marketplace/redhat-operators-2tvd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2tvd8\": dial 
tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093987 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fff1ba77657b0caf825d8174df4ead60c8cb6239ddf73cc986d72ad160a24312"} Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.093964 4979 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.094054 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.094155 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: E0130 21:44:29.094347 4979 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.094422 4979 status_manager.go:851] "Failed to get status for pod" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" pod="openshift-marketplace/redhat-marketplace-qmzzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmzzl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.094744 4979 status_manager.go:851] "Failed to get status for pod" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" pod="openshift-marketplace/community-operators-454jj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-454jj\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.094974 4979 status_manager.go:851] "Failed to get status for pod" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" pod="openshift-marketplace/redhat-operators-2tvd8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2tvd8\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.095284 4979 status_manager.go:851] "Failed to get status for pod" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" pod="openshift-marketplace/redhat-marketplace-qmzzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qmzzl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.095554 4979 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" 
Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.095824 4979 status_manager.go:851] "Failed to get status for pod" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.096069 4979 status_manager.go:851] "Failed to get status for pod" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" pod="openshift-marketplace/community-operators-krrkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-krrkl\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.096240 4979 status_manager.go:851] "Failed to get status for pod" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" pod="openshift-marketplace/redhat-operators-sg6j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sg6j7\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.096411 4979 status_manager.go:851] "Failed to get status for pod" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" pod="openshift-marketplace/certified-operators-dk444" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dk444\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: I0130 21:44:29.096607 4979 status_manager.go:851] "Failed to get status for pod" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" pod="openshift-marketplace/certified-operators-npfvh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-npfvh\": dial tcp 38.102.83.143:6443: connect: connection refused" Jan 30 21:44:29 crc kubenswrapper[4979]: E0130 21:44:29.646687 4979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.143:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-2tvd8.188fa052ce9d6823 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-2tvd8,UID:3641ad73-644b-4d71-860b-4d8b7e6a3a6d,APIVersion:v1,ResourceVersion:28136,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 24.835s (24.835s including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:44:16.094079011 +0000 UTC m=+252.055326034,LastTimestamp:2026-01-30 21:44:16.094079011 +0000 UTC m=+252.055326034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:44:30 crc kubenswrapper[4979]: I0130 21:44:30.102619 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f1686de9e5686295b98660d6de6c3153fe990ec48d7c5f2b7885125a296d0332"} Jan 30 21:44:30 crc kubenswrapper[4979]: I0130 21:44:30.102682 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"223bfae37478509ca7d8f8d6c295f41d7165a10eb31ad76c77be44273b7b0c0f"} Jan 30 21:44:31 crc kubenswrapper[4979]: I0130 21:44:31.146172 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f0937ba2cdd38e1742698d325d893e8ab8922542010d8f0b3af5a2e8bcaa307a"} Jan 30 21:44:31 crc kubenswrapper[4979]: I0130 21:44:31.146943 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c80b5290a41693c0e021e97a7edf0c42a4c69ce69da49b5be85f4aa76f7214f5"} Jan 30 21:44:31 crc kubenswrapper[4979]: I0130 21:44:31.807887 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:44:31 crc kubenswrapper[4979]: I0130 21:44:31.814137 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.157126 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c4d9222535a37568ced81697c84e701762a1cf4c9acfd49ba2efb7f1ff81d184"} Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.157636 4979 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.157672 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.157972 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.576401 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.576491 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.630425 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dk444" Jan 30 
21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.700627 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.701298 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.756019 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.934520 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.934582 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:44:32 crc kubenswrapper[4979]: I0130 21:44:32.979003 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:44:33 crc kubenswrapper[4979]: I0130 21:44:33.213470 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:44:33 crc kubenswrapper[4979]: I0130 21:44:33.217102 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:44:33 crc kubenswrapper[4979]: I0130 21:44:33.219232 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:44:33 crc kubenswrapper[4979]: I0130 21:44:33.424956 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-454jj" Jan 30 21:44:33 crc kubenswrapper[4979]: I0130 21:44:33.425056 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-454jj" Jan 30 21:44:33 crc kubenswrapper[4979]: I0130 21:44:33.468180 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-454jj" Jan 30 21:44:34 crc kubenswrapper[4979]: I0130 21:44:34.215821 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-454jj" Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.155565 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.155657 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.209554 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.259368 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.818387 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.818448 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.877152 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.993143 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:44:35 crc kubenswrapper[4979]: I0130 21:44:35.993228 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.062919 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.139055 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.139216 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.139244 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.146882 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.227550 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:44:36 crc kubenswrapper[4979]: I0130 21:44:36.229749 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:44:37 crc kubenswrapper[4979]: I0130 21:44:37.185864 4979 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:37 crc kubenswrapper[4979]: I0130 21:44:37.232465 4979 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12b694c6-7029-4077-a1d9-ffd9919dd5ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:29Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://223bfae37478509ca7d8f8d6c295f41d7165a10eb31ad76c77be44273b7b0c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80b5290a41693c0e021e97a7edf0c42a4c69ce69da49b5be85f4aa76f7214f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1686de9e5686295b98660d6de6c3153fe990ec48d7c5f2b7885125a296d0332\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d9222535a37568ced81697c84e701762a1cf4c9acfd49ba2efb7f1ff81d184\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0937ba2cd
d38e1742698d325d893e8ab8922542010d8f0b3af5a2e8bcaa307a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:44:30Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff1ba77657b0caf825d8174df4ead60c8cb6239ddf73cc986d72ad160a24312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff1ba77657b0caf825d8174df4ead60c8cb6239ddf73cc986d72ad160a24312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"12b694c6-7029-4077-a1d9-ffd9919dd5ee\": field is immutable" Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.064326 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.075161 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2d572690-3742-4c1d-b3e0-d43d6664ef66" Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.192297 4979 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.192335 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.196848 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2d572690-3742-4c1d-b3e0-d43d6664ef66" Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.197184 4979 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://223bfae37478509ca7d8f8d6c295f41d7165a10eb31ad76c77be44273b7b0c0f" Jan 30 21:44:38 crc kubenswrapper[4979]: I0130 21:44:38.197201 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:39 crc kubenswrapper[4979]: I0130 21:44:39.198971 4979 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:39 crc kubenswrapper[4979]: I0130 21:44:39.199015 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:39 crc kubenswrapper[4979]: I0130 21:44:39.202970 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2d572690-3742-4c1d-b3e0-d43d6664ef66" Jan 30 21:44:46 crc kubenswrapper[4979]: I0130 21:44:46.211645 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 21:44:46 crc kubenswrapper[4979]: I0130 21:44:46.695156 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 21:44:46 crc kubenswrapper[4979]: I0130 21:44:46.822356 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 21:44:46 crc kubenswrapper[4979]: I0130 21:44:46.896452 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 21:44:47 crc kubenswrapper[4979]: I0130 21:44:47.439866 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 21:44:47 crc kubenswrapper[4979]: I0130 21:44:47.812655 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.175209 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.207734 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.302394 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.528422 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.606396 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.824724 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.902423 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.915927 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 21:44:48 crc kubenswrapper[4979]: I0130 21:44:48.953921 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.062145 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" 
Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.078269 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.093072 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.141179 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.239640 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.406660 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.480236 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.502440 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.522023 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.551189 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.579627 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.591825 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.596312 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.605214 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.605262 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.760207 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.776598 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.957701 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 21:44:49 crc kubenswrapper[4979]: I0130 21:44:49.987079 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.059170 4979 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.156906 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.261754 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.263472 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.398955 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.486811 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.506350 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.563263 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.613797 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.690861 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.743653 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 21:44:50 crc kubenswrapper[4979]: I0130 21:44:50.802101 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.030257 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.132310 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.209438 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.215005 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.308932 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.368543 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.391458 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.416718 4979 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-image-registry"/"trusted-ca" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.449402 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.491664 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.492752 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.536592 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.630279 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.757605 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.813627 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 21:44:51 crc kubenswrapper[4979]: I0130 21:44:51.955608 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.067706 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.085132 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.099867 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.221321 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.222090 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.235911 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.258696 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.326137 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.329682 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.333592 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.422728 4979 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.502406 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.528813 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.535459 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.556276 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.563007 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.564850 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.617871 4979 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.617922 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.618147 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-npfvh" podStartSLOduration=32.724268139 podStartE2EDuration="2m20.618073447s" podCreationTimestamp="2026-01-30 21:42:32 +0000 UTC" firstStartedPulling="2026-01-30 21:42:35.581345717 +0000 UTC m=+151.542592750" lastFinishedPulling="2026-01-30 21:44:23.475151025 +0000 UTC m=+259.436398058" observedRunningTime="2026-01-30 21:44:36.903460569 +0000 UTC m=+272.864707602" watchObservedRunningTime="2026-01-30 21:44:52.618073447 +0000 UTC m=+288.579320480" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.623173 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-454jj" podStartSLOduration=30.708594006 podStartE2EDuration="2m20.619026383s" podCreationTimestamp="2026-01-30 21:42:32 +0000 UTC" firstStartedPulling="2026-01-30 21:42:35.661020712 +0000 UTC m=+151.622267755" lastFinishedPulling="2026-01-30 21:44:25.571453069 +0000 UTC m=+261.532700132" observedRunningTime="2026-01-30 21:44:36.919612171 +0000 UTC m=+272.880859204" watchObservedRunningTime="2026-01-30 21:44:52.619026383 +0000 UTC m=+288.580273416" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.624347 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.628460 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.627613 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sg6j7" podStartSLOduration=37.167108538 podStartE2EDuration="2m17.627450513s" podCreationTimestamp="2026-01-30 21:42:35 +0000 UTC" 
firstStartedPulling="2026-01-30 21:42:39.149181543 +0000 UTC m=+155.110428576" lastFinishedPulling="2026-01-30 21:44:19.609523528 +0000 UTC m=+255.570770551" observedRunningTime="2026-01-30 21:44:36.871554769 +0000 UTC m=+272.832801802" watchObservedRunningTime="2026-01-30 21:44:52.627450513 +0000 UTC m=+288.588697576" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.630237 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qmzzl" podStartSLOduration=31.464259558 podStartE2EDuration="2m18.630228499s" podCreationTimestamp="2026-01-30 21:42:34 +0000 UTC" firstStartedPulling="2026-01-30 21:42:37.796412455 +0000 UTC m=+153.757659488" lastFinishedPulling="2026-01-30 21:44:24.962381396 +0000 UTC m=+260.923628429" observedRunningTime="2026-01-30 21:44:36.956162919 +0000 UTC m=+272.917409972" watchObservedRunningTime="2026-01-30 21:44:52.630228499 +0000 UTC m=+288.591475532" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.630744 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2tvd8" podStartSLOduration=40.453309381 podStartE2EDuration="2m17.630703712s" podCreationTimestamp="2026-01-30 21:42:35 +0000 UTC" firstStartedPulling="2026-01-30 21:42:38.91666425 +0000 UTC m=+154.877911283" lastFinishedPulling="2026-01-30 21:44:16.094058581 +0000 UTC m=+252.055305614" observedRunningTime="2026-01-30 21:44:36.935189346 +0000 UTC m=+272.896436379" watchObservedRunningTime="2026-01-30 21:44:52.630703712 +0000 UTC m=+288.591950745" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.636241 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-krrkl" podStartSLOduration=32.766471652999996 podStartE2EDuration="2m20.636225183s" podCreationTimestamp="2026-01-30 21:42:32 +0000 UTC" firstStartedPulling="2026-01-30 21:42:34.426255288 +0000 UTC m=+150.387502321" lastFinishedPulling="2026-01-30 21:44:22.296008808 +0000 UTC m=+258.257255851" observedRunningTime="2026-01-30 21:44:36.855243923 +0000 UTC m=+272.816490966" watchObservedRunningTime="2026-01-30 21:44:52.636225183 +0000 UTC m=+288.597472216" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.637005 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dk444" podStartSLOduration=32.133758108 podStartE2EDuration="2m20.636991814s" podCreationTimestamp="2026-01-30 21:42:32 +0000 UTC" firstStartedPulling="2026-01-30 21:42:35.581770459 +0000 UTC m=+151.543017492" lastFinishedPulling="2026-01-30 21:44:24.085004155 +0000 UTC m=+260.046251198" observedRunningTime="2026-01-30 21:44:36.888050229 +0000 UTC m=+272.849297262" watchObservedRunningTime="2026-01-30 21:44:52.636991814 +0000 UTC m=+288.598238857" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.643940 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.644738 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.644936 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.646056 4979 kubelet.go:1909] "Trying 
to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.646085 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12b694c6-7029-4077-a1d9-ffd9919dd5ee" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.716592 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.738711 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.797403 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 21:44:52 crc kubenswrapper[4979]: I0130 21:44:52.888415 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.013479 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.160204 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.172057 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.232874 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.279661 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.297897 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.345707 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.345683168 podStartE2EDuration="16.345683168s" podCreationTimestamp="2026-01-30 21:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:44:53.345680638 +0000 UTC m=+289.306927671" watchObservedRunningTime="2026-01-30 21:44:53.345683168 +0000 UTC m=+289.306930201" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.360922 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=17.360900983 podStartE2EDuration="17.360900983s" podCreationTimestamp="2026-01-30 21:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:44:53.359906306 +0000 UTC m=+289.321153339" watchObservedRunningTime="2026-01-30 21:44:53.360900983 +0000 UTC m=+289.322148016" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.405514 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 21:44:53 crc 
kubenswrapper[4979]: I0130 21:44:53.505268 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.552793 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.695744 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.756288 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.763415 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.772349 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.801810 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.814263 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 21:44:53 crc kubenswrapper[4979]: I0130 21:44:53.857512 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.002219 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.015287 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.051290 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.134697 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.167456 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.179076 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.212837 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.259910 4979 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.264160 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.273267 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:44:54 crc kubenswrapper[4979]: 
I0130 21:44:54.298790 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.555886 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.602908 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.612141 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.672011 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.751685 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.859192 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 21:44:54 crc kubenswrapper[4979]: I0130 21:44:54.956438 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.090003 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.111965 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.151625 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.241771 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.263899 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.272566 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.363606 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.382827 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.384406 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.691854 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.703664 4979 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"trusted-ca-bundle" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.720628 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.820859 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.911523 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.939608 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 21:44:55 crc kubenswrapper[4979]: I0130 21:44:55.994784 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.045182 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.053554 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.101954 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.115980 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.133755 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.149621 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.255591 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.324921 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.356061 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.380434 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.381976 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.416627 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.452885 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.623246 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 
21:44:56.643098 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.644294 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.735071 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.757650 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.790562 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.823415 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.841791 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:44:56 crc kubenswrapper[4979]: I0130 21:44:56.893337 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.016404 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.065797 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.080184 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.081311 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.090575 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.205946 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.279212 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.385479 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.437457 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.548224 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.568499 4979 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 
21:44:57.718411 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.737500 4979 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.737723 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.806690 4979 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 21:44:57 crc kubenswrapper[4979]: I0130 21:44:57.976150 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.013368 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.202478 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.217541 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.312724 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.316677 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.322414 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.329283 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.386785 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.501239 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.563396 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.592988 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.710946 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.823130 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.832088 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.856149 4979 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.925718 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.937235 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 21:44:58 crc kubenswrapper[4979]: I0130 21:44:58.960356 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.077341 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.080205 4979 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.080509 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9b719cf0f25dab63a9a3bef1d08691b4a3f96749c35e20461904c01ac35822c1" gracePeriod=5 Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.104757 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.108450 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.126017 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.141309 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.182898 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.254067 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.257380 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.259013 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.297821 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.677597 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.684357 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 
21:44:59.686706 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.844460 4979 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 21:44:59 crc kubenswrapper[4979]: I0130 21:44:59.855350 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.009919 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.019462 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.024769 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.088433 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.097362 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.118261 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.184142 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"] Jan 30 21:45:00 crc kubenswrapper[4979]: E0130 21:45:00.184420 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" containerName="installer" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.184435 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" containerName="installer" Jan 30 21:45:00 crc kubenswrapper[4979]: E0130 21:45:00.184469 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.184478 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.184594 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.184603 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5abe88-43e9-47ae-87fc-9163bd1aec5e" containerName="installer" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.185127 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.191691 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.192006 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.195945 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"] Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.293596 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.324563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-secret-volume\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.324944 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db4r5\" (UniqueName: \"kubernetes.io/projected/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-kube-api-access-db4r5\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.325073 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-config-volume\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.364350 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.426322 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-config-volume\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.426399 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-secret-volume\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.426504 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db4r5\" (UniqueName: 
\"kubernetes.io/projected/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-kube-api-access-db4r5\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.427745 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-config-volume\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.446421 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-secret-volume\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.457455 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db4r5\" (UniqueName: \"kubernetes.io/projected/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-kube-api-access-db4r5\") pod \"collect-profiles-29496825-xndlp\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.500056 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.505083 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.602623 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.619063 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.653627 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.653646 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.707402 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.894829 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.927098 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"] Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.985914 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 21:45:00 crc kubenswrapper[4979]: I0130 21:45:00.991140 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.000523 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.103612 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.230855 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.236861 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.343023 4979 generic.go:334] "Generic (PLEG): container finished" podID="bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" containerID="0ffeefd62cefc7a667955d4354abe400003540bade5b7a6dadf2ad36b308e029" exitCode=0 Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.343093 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" event={"ID":"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa","Type":"ContainerDied","Data":"0ffeefd62cefc7a667955d4354abe400003540bade5b7a6dadf2ad36b308e029"} Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.343123 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" event={"ID":"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa","Type":"ContainerStarted","Data":"e67f6eed31b14319537901f85e6944a16de2613d83fcff0ea3359270388b5241"} Jan 30 21:45:01 crc 
kubenswrapper[4979]: I0130 21:45:01.368355 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.447020 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.511629 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.557567 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.720145 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.729975 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.801397 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 21:45:01 crc kubenswrapper[4979]: I0130 21:45:01.909907 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.121498 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.271360 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.613128 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.757720 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-config-volume\") pod \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.758133 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-secret-volume\") pod \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.758224 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db4r5\" (UniqueName: \"kubernetes.io/projected/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-kube-api-access-db4r5\") pod \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\" (UID: \"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa\") " Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.759325 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" (UID: "bd6fec1a-296c-4b7e-b06f-cb48697ce0aa"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.763082 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-kube-api-access-db4r5" (OuterVolumeSpecName: "kube-api-access-db4r5") pod "bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" (UID: "bd6fec1a-296c-4b7e-b06f-cb48697ce0aa"). InnerVolumeSpecName "kube-api-access-db4r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.763425 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" (UID: "bd6fec1a-296c-4b7e-b06f-cb48697ce0aa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.860006 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.860063 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:02 crc kubenswrapper[4979]: I0130 21:45:02.860081 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db4r5\" (UniqueName: \"kubernetes.io/projected/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa-kube-api-access-db4r5\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:03 crc kubenswrapper[4979]: I0130 21:45:03.109557 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 21:45:03 crc kubenswrapper[4979]: I0130 21:45:03.355949 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" event={"ID":"bd6fec1a-296c-4b7e-b06f-cb48697ce0aa","Type":"ContainerDied","Data":"e67f6eed31b14319537901f85e6944a16de2613d83fcff0ea3359270388b5241"} Jan 30 21:45:03 crc kubenswrapper[4979]: I0130 21:45:03.356008 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e67f6eed31b14319537901f85e6944a16de2613d83fcff0ea3359270388b5241" Jan 30 21:45:03 crc kubenswrapper[4979]: I0130 21:45:03.356023 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.363918 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.363996 4979 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9b719cf0f25dab63a9a3bef1d08691b4a3f96749c35e20461904c01ac35822c1" exitCode=137 Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.659721 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.659879 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.787967 4979 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789385 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789565 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789605 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789643 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789692 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789811 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789889 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789946 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.789990 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.790343 4979 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.790370 4979 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.790381 4979 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.790393 4979 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.795762 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:45:04 crc kubenswrapper[4979]: I0130 21:45:04.891314 4979 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.085611 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.085849 4979 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.097406 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.097693 4979 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6222e4b9-7642-4957-b2f8-2b18a8a64b75" Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.100698 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.100722 4979 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6222e4b9-7642-4957-b2f8-2b18a8a64b75" Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.372345 4979 scope.go:117] "RemoveContainer" containerID="9b719cf0f25dab63a9a3bef1d08691b4a3f96749c35e20461904c01ac35822c1" Jan 30 21:45:05 crc kubenswrapper[4979]: I0130 21:45:05.372399 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:45:18 crc kubenswrapper[4979]: I0130 21:45:18.004381 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 21:45:19 crc kubenswrapper[4979]: I0130 21:45:19.091879 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 21:45:20 crc kubenswrapper[4979]: I0130 21:45:20.246205 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 21:45:20 crc kubenswrapper[4979]: I0130 21:45:20.441431 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 21:45:22 crc kubenswrapper[4979]: I0130 21:45:22.019446 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 21:45:24 crc kubenswrapper[4979]: I0130 21:45:24.202827 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 21:45:24 crc kubenswrapper[4979]: I0130 21:45:24.587010 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 21:45:25 crc kubenswrapper[4979]: I0130 21:45:25.867980 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"] Jan 30 21:45:25 crc kubenswrapper[4979]: I0130 21:45:25.868300 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" podUID="c138f389-e49e-4c26-b2ee-af169b1c8343" containerName="route-controller-manager" containerID="cri-o://31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb" gracePeriod=30 Jan 30 21:45:25 crc kubenswrapper[4979]: I0130 21:45:25.894712 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68c44f896-2p552"] Jan 30 21:45:25 crc kubenswrapper[4979]: I0130 21:45:25.894989 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" podUID="f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" containerName="controller-manager" containerID="cri-o://c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3" gracePeriod=30 Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.315925 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.322166 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420448 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-config\") pod \"c138f389-e49e-4c26-b2ee-af169b1c8343\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420518 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-config\") pod \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420548 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-client-ca\") pod \"c138f389-e49e-4c26-b2ee-af169b1c8343\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420587 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgwbb\" (UniqueName: \"kubernetes.io/projected/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-kube-api-access-rgwbb\") pod \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420649 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-proxy-ca-bundles\") pod \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420677 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-serving-cert\") pod \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420716 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwtgt\" (UniqueName: \"kubernetes.io/projected/c138f389-e49e-4c26-b2ee-af169b1c8343-kube-api-access-xwtgt\") pod \"c138f389-e49e-4c26-b2ee-af169b1c8343\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420738 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c138f389-e49e-4c26-b2ee-af169b1c8343-serving-cert\") pod \"c138f389-e49e-4c26-b2ee-af169b1c8343\" (UID: \"c138f389-e49e-4c26-b2ee-af169b1c8343\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.420793 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-client-ca\") pod \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\" (UID: \"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b\") " Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.421491 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-config" (OuterVolumeSpecName: "config") pod "c138f389-e49e-4c26-b2ee-af169b1c8343" (UID: 
"c138f389-e49e-4c26-b2ee-af169b1c8343"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.421701 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-client-ca" (OuterVolumeSpecName: "client-ca") pod "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" (UID: "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.421911 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-config" (OuterVolumeSpecName: "config") pod "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" (UID: "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.422555 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-client-ca" (OuterVolumeSpecName: "client-ca") pod "c138f389-e49e-4c26-b2ee-af169b1c8343" (UID: "c138f389-e49e-4c26-b2ee-af169b1c8343"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.422702 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" (UID: "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.427626 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c138f389-e49e-4c26-b2ee-af169b1c8343-kube-api-access-xwtgt" (OuterVolumeSpecName: "kube-api-access-xwtgt") pod "c138f389-e49e-4c26-b2ee-af169b1c8343" (UID: "c138f389-e49e-4c26-b2ee-af169b1c8343"). InnerVolumeSpecName "kube-api-access-xwtgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.428083 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c138f389-e49e-4c26-b2ee-af169b1c8343-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c138f389-e49e-4c26-b2ee-af169b1c8343" (UID: "c138f389-e49e-4c26-b2ee-af169b1c8343"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.428183 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-kube-api-access-rgwbb" (OuterVolumeSpecName: "kube-api-access-rgwbb") pod "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" (UID: "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b"). InnerVolumeSpecName "kube-api-access-rgwbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.428272 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" (UID: "f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.522871 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwtgt\" (UniqueName: \"kubernetes.io/projected/c138f389-e49e-4c26-b2ee-af169b1c8343-kube-api-access-xwtgt\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.522929 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c138f389-e49e-4c26-b2ee-af169b1c8343-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.522947 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.522964 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.522982 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.522997 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c138f389-e49e-4c26-b2ee-af169b1c8343-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.523014 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgwbb\" (UniqueName: \"kubernetes.io/projected/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-kube-api-access-rgwbb\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.523057 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.523074 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.525152 4979 generic.go:334] "Generic (PLEG): container finished" podID="c138f389-e49e-4c26-b2ee-af169b1c8343" containerID="31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb" exitCode=0 Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.525210 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.525247 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" event={"ID":"c138f389-e49e-4c26-b2ee-af169b1c8343","Type":"ContainerDied","Data":"31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb"} Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.525281 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb" event={"ID":"c138f389-e49e-4c26-b2ee-af169b1c8343","Type":"ContainerDied","Data":"18c23f6f985da2e38cf0d706d168368cd8421368b40bada9a0e8edfd231d5894"} Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.525302 4979 scope.go:117] "RemoveContainer" containerID="31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.530522 4979 generic.go:334] "Generic (PLEG): container finished" podID="f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" containerID="c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3" exitCode=0 Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.530585 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.530583 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" event={"ID":"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b","Type":"ContainerDied","Data":"c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3"} Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.530731 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68c44f896-2p552" event={"ID":"f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b","Type":"ContainerDied","Data":"f4a830a09061a5933a998451c777de577ea08083a40015478a63156286038c77"} Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.545806 4979 scope.go:117] "RemoveContainer" containerID="31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb" Jan 30 21:45:26 crc kubenswrapper[4979]: E0130 21:45:26.546302 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb\": container with ID starting with 31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb not found: ID does not exist" containerID="31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.546363 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb"} err="failed to get container status \"31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb\": rpc error: code = NotFound desc = could not find container \"31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb\": container with ID starting with 31de2c598015657bfb3bd1ac3039c6afcd8ebbf561d9c2c038ed97c17e7882fb not found: ID does not exist" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.546397 4979 scope.go:117] "RemoveContainer" containerID="c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3" Jan 30 
21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.557721 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"] Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.561958 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-756df7bd56-4mqfb"] Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.566638 4979 scope.go:117] "RemoveContainer" containerID="c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3" Jan 30 21:45:26 crc kubenswrapper[4979]: E0130 21:45:26.567200 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3\": container with ID starting with c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3 not found: ID does not exist" containerID="c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.567248 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3"} err="failed to get container status \"c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3\": rpc error: code = NotFound desc = could not find container \"c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3\": container with ID starting with c29ec33af6f322cfc7234e6d9ee61a33ded92abb74675b9514295e8e0860abe3 not found: ID does not exist" Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.574143 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68c44f896-2p552"] Jan 30 21:45:26 crc kubenswrapper[4979]: I0130 21:45:26.577564 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-68c44f896-2p552"] Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.083527 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c138f389-e49e-4c26-b2ee-af169b1c8343" path="/var/lib/kubelet/pods/c138f389-e49e-4c26-b2ee-af169b1c8343/volumes" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.084887 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" path="/var/lib/kubelet/pods/f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b/volumes" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.818795 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77df74fc74-dxgq2"] Jan 30 21:45:27 crc kubenswrapper[4979]: E0130 21:45:27.819259 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" containerName="collect-profiles" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.819281 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" containerName="collect-profiles" Jan 30 21:45:27 crc kubenswrapper[4979]: E0130 21:45:27.819311 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" containerName="controller-manager" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.819324 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" containerName="controller-manager" Jan 30 21:45:27 crc kubenswrapper[4979]: E0130 
21:45:27.819345 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c138f389-e49e-4c26-b2ee-af169b1c8343" containerName="route-controller-manager" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.819358 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c138f389-e49e-4c26-b2ee-af169b1c8343" containerName="route-controller-manager" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.819548 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" containerName="collect-profiles" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.819571 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c138f389-e49e-4c26-b2ee-af169b1c8343" containerName="route-controller-manager" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.819595 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6cce4b7-6306-43b1-8e2d-e4a29ec3bd6b" containerName="controller-manager" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.820275 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.828698 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"] Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.830143 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.838891 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.838926 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.839593 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840028 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840469 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840693 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840550 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840766 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840768 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840769 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.840928 4979 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.841118 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.847311 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-client-ca\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.847373 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j72kl\" (UniqueName: \"kubernetes.io/projected/a46553cf-e4b5-4d15-b590-9c6e06819ab5-kube-api-access-j72kl\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.847676 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-proxy-ca-bundles\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.847766 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46553cf-e4b5-4d15-b590-9c6e06819ab5-serving-cert\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.847834 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-config\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.848763 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.850617 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77df74fc74-dxgq2"] Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.863135 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"] Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948686 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzkz6\" (UniqueName: \"kubernetes.io/projected/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-kube-api-access-qzkz6\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " 
pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948777 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-client-ca\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948809 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j72kl\" (UniqueName: \"kubernetes.io/projected/a46553cf-e4b5-4d15-b590-9c6e06819ab5-kube-api-access-j72kl\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948843 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-config\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948884 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-proxy-ca-bundles\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948907 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-serving-cert\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948935 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-client-ca\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948955 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46553cf-e4b5-4d15-b590-9c6e06819ab5-serving-cert\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.948984 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-config\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc 
kubenswrapper[4979]: I0130 21:45:27.949861 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-client-ca\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.950591 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-proxy-ca-bundles\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.950675 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-config\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.954293 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46553cf-e4b5-4d15-b590-9c6e06819ab5-serving-cert\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:27 crc kubenswrapper[4979]: I0130 21:45:27.980613 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j72kl\" (UniqueName: \"kubernetes.io/projected/a46553cf-e4b5-4d15-b590-9c6e06819ab5-kube-api-access-j72kl\") pod \"controller-manager-77df74fc74-dxgq2\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.050663 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-config\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.050734 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-serving-cert\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.050790 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-client-ca\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.051485 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzkz6\" (UniqueName: 
\"kubernetes.io/projected/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-kube-api-access-qzkz6\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.052096 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-config\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.052594 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-client-ca\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.054134 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-serving-cert\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.073891 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzkz6\" (UniqueName: \"kubernetes.io/projected/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-kube-api-access-qzkz6\") pod \"route-controller-manager-86c8588966-qn4cs\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.164535 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.187893 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.609497 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77df74fc74-dxgq2"] Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.660715 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"] Jan 30 21:45:28 crc kubenswrapper[4979]: W0130 21:45:28.664653 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e66afa2_6e33_4cc2_8b31_65987d8cd10b.slice/crio-e48be26e4d7f5553ec943d8afad20edb82a71bdca7fcd1ab7d02ddcafffc2115 WatchSource:0}: Error finding container e48be26e4d7f5553ec943d8afad20edb82a71bdca7fcd1ab7d02ddcafffc2115: Status 404 returned error can't find the container with id e48be26e4d7f5553ec943d8afad20edb82a71bdca7fcd1ab7d02ddcafffc2115 Jan 30 21:45:28 crc kubenswrapper[4979]: I0130 21:45:28.702367 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.348992 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-454jj"] Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.349539 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-454jj" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="registry-server" containerID="cri-o://74b0411650916af7082a09037409fe4233d0a54527d4cc2f176ba5e845dc24a2" gracePeriod=2 Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.552676 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-npfvh"] Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.553355 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-npfvh" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="registry-server" containerID="cri-o://8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7" gracePeriod=2 Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.596259 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" event={"ID":"a46553cf-e4b5-4d15-b590-9c6e06819ab5","Type":"ContainerStarted","Data":"46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b"} Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.597484 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" event={"ID":"a46553cf-e4b5-4d15-b590-9c6e06819ab5","Type":"ContainerStarted","Data":"7c14a9171f3c938df9defa43623be8174b1cfbdfa890eb7b412765a04a3f397c"} Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.597633 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.601791 4979 generic.go:334] "Generic (PLEG): container finished" podID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerID="74b0411650916af7082a09037409fe4233d0a54527d4cc2f176ba5e845dc24a2" exitCode=0 Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.601942 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-454jj" event={"ID":"82df7d39-6821-4916-b8c9-534688ca3d5e","Type":"ContainerDied","Data":"74b0411650916af7082a09037409fe4233d0a54527d4cc2f176ba5e845dc24a2"} Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.604143 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.604302 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" event={"ID":"0e66afa2-6e33-4cc2-8b31-65987d8cd10b","Type":"ContainerStarted","Data":"e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7"} Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.604418 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" event={"ID":"0e66afa2-6e33-4cc2-8b31-65987d8cd10b","Type":"ContainerStarted","Data":"e48be26e4d7f5553ec943d8afad20edb82a71bdca7fcd1ab7d02ddcafffc2115"} Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.604899 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.617715 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.623809 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" podStartSLOduration=4.623778415 podStartE2EDuration="4.623778415s" podCreationTimestamp="2026-01-30 21:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:29.617848842 +0000 UTC m=+325.579095865" watchObservedRunningTime="2026-01-30 21:45:29.623778415 +0000 UTC m=+325.585025468" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.651941 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" podStartSLOduration=4.651915828 podStartE2EDuration="4.651915828s" podCreationTimestamp="2026-01-30 21:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:29.637729808 +0000 UTC m=+325.598976861" watchObservedRunningTime="2026-01-30 21:45:29.651915828 +0000 UTC m=+325.613162861" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.756829 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-454jj" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.889802 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-catalog-content\") pod \"82df7d39-6821-4916-b8c9-534688ca3d5e\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.889892 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hzvf\" (UniqueName: \"kubernetes.io/projected/82df7d39-6821-4916-b8c9-534688ca3d5e-kube-api-access-7hzvf\") pod \"82df7d39-6821-4916-b8c9-534688ca3d5e\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.889967 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-utilities\") pod \"82df7d39-6821-4916-b8c9-534688ca3d5e\" (UID: \"82df7d39-6821-4916-b8c9-534688ca3d5e\") " Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.891121 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-utilities" (OuterVolumeSpecName: "utilities") pod "82df7d39-6821-4916-b8c9-534688ca3d5e" (UID: "82df7d39-6821-4916-b8c9-534688ca3d5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.900212 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82df7d39-6821-4916-b8c9-534688ca3d5e-kube-api-access-7hzvf" (OuterVolumeSpecName: "kube-api-access-7hzvf") pod "82df7d39-6821-4916-b8c9-534688ca3d5e" (UID: "82df7d39-6821-4916-b8c9-534688ca3d5e"). InnerVolumeSpecName "kube-api-access-7hzvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.932814 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.944850 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82df7d39-6821-4916-b8c9-534688ca3d5e" (UID: "82df7d39-6821-4916-b8c9-534688ca3d5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.991738 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-catalog-content\") pod \"568a44ae-c892-48a7-b4c0-2d83606e7b95\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.991803 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-utilities\") pod \"568a44ae-c892-48a7-b4c0-2d83606e7b95\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.991896 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqdhj\" (UniqueName: \"kubernetes.io/projected/568a44ae-c892-48a7-b4c0-2d83606e7b95-kube-api-access-kqdhj\") pod \"568a44ae-c892-48a7-b4c0-2d83606e7b95\" (UID: \"568a44ae-c892-48a7-b4c0-2d83606e7b95\") " Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.992257 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.992278 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82df7d39-6821-4916-b8c9-534688ca3d5e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.992295 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hzvf\" (UniqueName: \"kubernetes.io/projected/82df7d39-6821-4916-b8c9-534688ca3d5e-kube-api-access-7hzvf\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.993217 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-utilities" (OuterVolumeSpecName: "utilities") pod "568a44ae-c892-48a7-b4c0-2d83606e7b95" (UID: "568a44ae-c892-48a7-b4c0-2d83606e7b95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:29 crc kubenswrapper[4979]: I0130 21:45:29.996444 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568a44ae-c892-48a7-b4c0-2d83606e7b95-kube-api-access-kqdhj" (OuterVolumeSpecName: "kube-api-access-kqdhj") pod "568a44ae-c892-48a7-b4c0-2d83606e7b95" (UID: "568a44ae-c892-48a7-b4c0-2d83606e7b95"). InnerVolumeSpecName "kube-api-access-kqdhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.033588 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "568a44ae-c892-48a7-b4c0-2d83606e7b95" (UID: "568a44ae-c892-48a7-b4c0-2d83606e7b95"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.093764 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqdhj\" (UniqueName: \"kubernetes.io/projected/568a44ae-c892-48a7-b4c0-2d83606e7b95-kube-api-access-kqdhj\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.093799 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.093809 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568a44ae-c892-48a7-b4c0-2d83606e7b95-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.614172 4979 generic.go:334] "Generic (PLEG): container finished" podID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerID="8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7" exitCode=0 Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.614266 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-npfvh" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.614271 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npfvh" event={"ID":"568a44ae-c892-48a7-b4c0-2d83606e7b95","Type":"ContainerDied","Data":"8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7"} Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.614655 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-npfvh" event={"ID":"568a44ae-c892-48a7-b4c0-2d83606e7b95","Type":"ContainerDied","Data":"9e701107804895c162dc5dbfb55c5fb4850bb1995cf07bbee85bb8f8a3ce5a6f"} Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.614681 4979 scope.go:117] "RemoveContainer" containerID="8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.618155 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-454jj" event={"ID":"82df7d39-6821-4916-b8c9-534688ca3d5e","Type":"ContainerDied","Data":"897e930b920945770fe85e65189da3f41f538afe25ecb7f6857d9256eed7d54a"} Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.618286 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-454jj" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.637328 4979 scope.go:117] "RemoveContainer" containerID="d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.664237 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-454jj"] Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.666304 4979 scope.go:117] "RemoveContainer" containerID="82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.666133 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-454jj"] Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.689793 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-npfvh"] Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.693935 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-npfvh"] Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.697370 4979 scope.go:117] "RemoveContainer" containerID="8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7" Jan 30 21:45:30 crc kubenswrapper[4979]: E0130 21:45:30.697792 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7\": container with ID starting with 8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7 not found: ID does not exist" containerID="8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.697829 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7"} err="failed to get container status \"8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7\": rpc error: code = NotFound desc = could not find container \"8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7\": container with ID starting with 8c51e0e2b94f2eb3f9361aace79cd88b5f501859f0d6bc2812d13b1d6a92dda7 not found: ID does not exist" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.697859 4979 scope.go:117] "RemoveContainer" containerID="d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874" Jan 30 21:45:30 crc kubenswrapper[4979]: E0130 21:45:30.698135 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874\": container with ID starting with d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874 not found: ID does not exist" containerID="d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.698157 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874"} err="failed to get container status \"d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874\": rpc error: code = NotFound desc = could not find container \"d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874\": container with ID starting with 
d11e5670ac78c1e3b9a83c18cfdb532283a7ec2b9cf5f910b2721af1ceed1874 not found: ID does not exist" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.698174 4979 scope.go:117] "RemoveContainer" containerID="82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab" Jan 30 21:45:30 crc kubenswrapper[4979]: E0130 21:45:30.698551 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab\": container with ID starting with 82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab not found: ID does not exist" containerID="82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.698588 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab"} err="failed to get container status \"82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab\": rpc error: code = NotFound desc = could not find container \"82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab\": container with ID starting with 82c05ccccb999127770c9c6a457f3a2e6d0d477fca44d23daae3b563a60803ab not found: ID does not exist" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.698609 4979 scope.go:117] "RemoveContainer" containerID="74b0411650916af7082a09037409fe4233d0a54527d4cc2f176ba5e845dc24a2" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.716462 4979 scope.go:117] "RemoveContainer" containerID="8ce38f5c2d102434af1616c327c364faa35dac4f176a6f600fbf112072871235" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.739665 4979 scope.go:117] "RemoveContainer" containerID="bf235c47905ef6c38fcc7f3601d64c6f0ba215a6796ab2b1da97239f211b40de" Jan 30 21:45:30 crc kubenswrapper[4979]: I0130 21:45:30.757438 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.078447 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" path="/var/lib/kubelet/pods/568a44ae-c892-48a7-b4c0-2d83606e7b95/volumes" Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.079094 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" path="/var/lib/kubelet/pods/82df7d39-6821-4916-b8c9-534688ca3d5e/volumes" Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.603446 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.756227 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmzzl"] Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.756806 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qmzzl" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="registry-server" containerID="cri-o://17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d" gracePeriod=2 Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.960556 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sg6j7"] Jan 30 21:45:31 crc kubenswrapper[4979]: I0130 21:45:31.960829 4979 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/redhat-operators-sg6j7" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="registry-server" containerID="cri-o://aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1" gracePeriod=2 Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.166861 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.224453 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-utilities\") pod \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.224571 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqrjw\" (UniqueName: \"kubernetes.io/projected/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-kube-api-access-gqrjw\") pod \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.224619 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-catalog-content\") pod \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\" (UID: \"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5\") " Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.227581 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-utilities" (OuterVolumeSpecName: "utilities") pod "2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" (UID: "2b857a3f-c3a5-4851-ba1e-25d9dbc64de5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.233952 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-kube-api-access-gqrjw" (OuterVolumeSpecName: "kube-api-access-gqrjw") pod "2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" (UID: "2b857a3f-c3a5-4851-ba1e-25d9dbc64de5"). InnerVolumeSpecName "kube-api-access-gqrjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.246397 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" (UID: "2b857a3f-c3a5-4851-ba1e-25d9dbc64de5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.317707 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.326129 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqrjw\" (UniqueName: \"kubernetes.io/projected/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-kube-api-access-gqrjw\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.326161 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.326173 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.432205 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-utilities\") pod \"444df6ed-3c43-4310-adc6-69ab0a9ea702\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.432348 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-catalog-content\") pod \"444df6ed-3c43-4310-adc6-69ab0a9ea702\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.432375 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc7xc\" (UniqueName: \"kubernetes.io/projected/444df6ed-3c43-4310-adc6-69ab0a9ea702-kube-api-access-gc7xc\") pod \"444df6ed-3c43-4310-adc6-69ab0a9ea702\" (UID: \"444df6ed-3c43-4310-adc6-69ab0a9ea702\") " Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.433013 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-utilities" (OuterVolumeSpecName: "utilities") pod "444df6ed-3c43-4310-adc6-69ab0a9ea702" (UID: "444df6ed-3c43-4310-adc6-69ab0a9ea702"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.438340 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444df6ed-3c43-4310-adc6-69ab0a9ea702-kube-api-access-gc7xc" (OuterVolumeSpecName: "kube-api-access-gc7xc") pod "444df6ed-3c43-4310-adc6-69ab0a9ea702" (UID: "444df6ed-3c43-4310-adc6-69ab0a9ea702"). InnerVolumeSpecName "kube-api-access-gc7xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.534246 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc7xc\" (UniqueName: \"kubernetes.io/projected/444df6ed-3c43-4310-adc6-69ab0a9ea702-kube-api-access-gc7xc\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.534294 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.596277 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "444df6ed-3c43-4310-adc6-69ab0a9ea702" (UID: "444df6ed-3c43-4310-adc6-69ab0a9ea702"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.638160 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444df6ed-3c43-4310-adc6-69ab0a9ea702-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.644486 4979 generic.go:334] "Generic (PLEG): container finished" podID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerID="17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d" exitCode=0 Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.644552 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerDied","Data":"17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d"} Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.644585 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmzzl" event={"ID":"2b857a3f-c3a5-4851-ba1e-25d9dbc64de5","Type":"ContainerDied","Data":"4c62920e03a89d4d5765a230e2b55c002afe184d080ace3bcaa5b06f8f97c1f4"} Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.644601 4979 scope.go:117] "RemoveContainer" containerID="17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.644700 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmzzl" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.654479 4979 generic.go:334] "Generic (PLEG): container finished" podID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerID="aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1" exitCode=0 Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.654528 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerDied","Data":"aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1"} Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.654561 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg6j7" event={"ID":"444df6ed-3c43-4310-adc6-69ab0a9ea702","Type":"ContainerDied","Data":"87982f21eeaee850aff8e29886551952617d82411b159837b48e46f7e706dfb9"} Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.654644 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sg6j7" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.664819 4979 scope.go:117] "RemoveContainer" containerID="20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.683610 4979 scope.go:117] "RemoveContainer" containerID="3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.697200 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sg6j7"] Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.707313 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sg6j7"] Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.711229 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmzzl"] Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.714336 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmzzl"] Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.718853 4979 scope.go:117] "RemoveContainer" containerID="17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d" Jan 30 21:45:32 crc kubenswrapper[4979]: E0130 21:45:32.719337 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d\": container with ID starting with 17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d not found: ID does not exist" containerID="17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.719391 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d"} err="failed to get container status \"17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d\": rpc error: code = NotFound desc = could not find container \"17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d\": container with ID starting with 17f4b4c0d76a6b7b7107e71a09a86e2f4984bca430fb88843fd9b2388858c79d not found: ID does not exist" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.719432 4979 scope.go:117] "RemoveContainer" 
containerID="20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16" Jan 30 21:45:32 crc kubenswrapper[4979]: E0130 21:45:32.719707 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16\": container with ID starting with 20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16 not found: ID does not exist" containerID="20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.719730 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16"} err="failed to get container status \"20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16\": rpc error: code = NotFound desc = could not find container \"20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16\": container with ID starting with 20fb86f0872cb97f8d65a741ec840cf990c4e4271ec0e1ab75757feb3e9fcf16 not found: ID does not exist" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.719747 4979 scope.go:117] "RemoveContainer" containerID="3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065" Jan 30 21:45:32 crc kubenswrapper[4979]: E0130 21:45:32.721834 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065\": container with ID starting with 3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065 not found: ID does not exist" containerID="3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.721858 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065"} err="failed to get container status \"3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065\": rpc error: code = NotFound desc = could not find container \"3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065\": container with ID starting with 3a46961d14458dc14bffdf2ae60986ecca3341d81551f7b93017e2d6e7acc065 not found: ID does not exist" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.721872 4979 scope.go:117] "RemoveContainer" containerID="aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.738268 4979 scope.go:117] "RemoveContainer" containerID="bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.755609 4979 scope.go:117] "RemoveContainer" containerID="7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.772279 4979 scope.go:117] "RemoveContainer" containerID="aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1" Jan 30 21:45:32 crc kubenswrapper[4979]: E0130 21:45:32.772998 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1\": container with ID starting with aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1 not found: ID does not exist" containerID="aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1" 
Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.773115 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1"} err="failed to get container status \"aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1\": rpc error: code = NotFound desc = could not find container \"aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1\": container with ID starting with aee23d0d588322363033031771880219b3fd0a826e5d2738095e8f5e05fd13e1 not found: ID does not exist" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.773148 4979 scope.go:117] "RemoveContainer" containerID="bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516" Jan 30 21:45:32 crc kubenswrapper[4979]: E0130 21:45:32.773470 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516\": container with ID starting with bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516 not found: ID does not exist" containerID="bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.773486 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516"} err="failed to get container status \"bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516\": rpc error: code = NotFound desc = could not find container \"bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516\": container with ID starting with bf7a2dcad99033fb2ed5195d2a6b3e68752230c24288dd07418660ed52c9b516 not found: ID does not exist" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.773502 4979 scope.go:117] "RemoveContainer" containerID="7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d" Jan 30 21:45:32 crc kubenswrapper[4979]: E0130 21:45:32.774282 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d\": container with ID starting with 7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d not found: ID does not exist" containerID="7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d" Jan 30 21:45:32 crc kubenswrapper[4979]: I0130 21:45:32.774337 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d"} err="failed to get container status \"7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d\": rpc error: code = NotFound desc = could not find container \"7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d\": container with ID starting with 7450c3c66e2ead696c3ec4ab566d02ad92aed1081224ebcb948093614186885d not found: ID does not exist" Jan 30 21:45:33 crc kubenswrapper[4979]: I0130 21:45:33.076833 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" path="/var/lib/kubelet/pods/2b857a3f-c3a5-4851-ba1e-25d9dbc64de5/volumes" Jan 30 21:45:33 crc kubenswrapper[4979]: I0130 21:45:33.077942 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" 
path="/var/lib/kubelet/pods/444df6ed-3c43-4310-adc6-69ab0a9ea702/volumes" Jan 30 21:45:36 crc kubenswrapper[4979]: I0130 21:45:36.190337 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 21:45:36 crc kubenswrapper[4979]: I0130 21:45:36.588212 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77df74fc74-dxgq2"] Jan 30 21:45:36 crc kubenswrapper[4979]: I0130 21:45:36.588512 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" podUID="a46553cf-e4b5-4d15-b590-9c6e06819ab5" containerName="controller-manager" containerID="cri-o://46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b" gracePeriod=30 Jan 30 21:45:36 crc kubenswrapper[4979]: I0130 21:45:36.609788 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"] Jan 30 21:45:36 crc kubenswrapper[4979]: I0130 21:45:36.610105 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" podUID="0e66afa2-6e33-4cc2-8b31-65987d8cd10b" containerName="route-controller-manager" containerID="cri-o://e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7" gracePeriod=30 Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.195663 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.270675 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303284 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-client-ca\") pod \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303372 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46553cf-e4b5-4d15-b590-9c6e06819ab5-serving-cert\") pod \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303417 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-config\") pod \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303444 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j72kl\" (UniqueName: \"kubernetes.io/projected/a46553cf-e4b5-4d15-b590-9c6e06819ab5-kube-api-access-j72kl\") pod \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303470 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-client-ca\") pod \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303512 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzkz6\" (UniqueName: \"kubernetes.io/projected/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-kube-api-access-qzkz6\") pod \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303545 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-serving-cert\") pod \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303572 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-config\") pod \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\" (UID: \"0e66afa2-6e33-4cc2-8b31-65987d8cd10b\") " Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.303595 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-proxy-ca-bundles\") pod \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\" (UID: \"a46553cf-e4b5-4d15-b590-9c6e06819ab5\") " Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.304611 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a46553cf-e4b5-4d15-b590-9c6e06819ab5" 
(UID: "a46553cf-e4b5-4d15-b590-9c6e06819ab5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.304605 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e66afa2-6e33-4cc2-8b31-65987d8cd10b" (UID: "0e66afa2-6e33-4cc2-8b31-65987d8cd10b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.304638 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-config" (OuterVolumeSpecName: "config") pod "a46553cf-e4b5-4d15-b590-9c6e06819ab5" (UID: "a46553cf-e4b5-4d15-b590-9c6e06819ab5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.304959 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-client-ca" (OuterVolumeSpecName: "client-ca") pod "a46553cf-e4b5-4d15-b590-9c6e06819ab5" (UID: "a46553cf-e4b5-4d15-b590-9c6e06819ab5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.305125 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-config" (OuterVolumeSpecName: "config") pod "0e66afa2-6e33-4cc2-8b31-65987d8cd10b" (UID: "0e66afa2-6e33-4cc2-8b31-65987d8cd10b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.318308 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e66afa2-6e33-4cc2-8b31-65987d8cd10b" (UID: "0e66afa2-6e33-4cc2-8b31-65987d8cd10b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.318342 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a46553cf-e4b5-4d15-b590-9c6e06819ab5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a46553cf-e4b5-4d15-b590-9c6e06819ab5" (UID: "a46553cf-e4b5-4d15-b590-9c6e06819ab5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.318352 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46553cf-e4b5-4d15-b590-9c6e06819ab5-kube-api-access-j72kl" (OuterVolumeSpecName: "kube-api-access-j72kl") pod "a46553cf-e4b5-4d15-b590-9c6e06819ab5" (UID: "a46553cf-e4b5-4d15-b590-9c6e06819ab5"). InnerVolumeSpecName "kube-api-access-j72kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.318425 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-kube-api-access-qzkz6" (OuterVolumeSpecName: "kube-api-access-qzkz6") pod "0e66afa2-6e33-4cc2-8b31-65987d8cd10b" (UID: "0e66afa2-6e33-4cc2-8b31-65987d8cd10b"). 
InnerVolumeSpecName "kube-api-access-qzkz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.404814 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzkz6\" (UniqueName: \"kubernetes.io/projected/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-kube-api-access-qzkz6\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405147 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405214 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405271 4979 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405393 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405531 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a46553cf-e4b5-4d15-b590-9c6e06819ab5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405596 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46553cf-e4b5-4d15-b590-9c6e06819ab5-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405670 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j72kl\" (UniqueName: \"kubernetes.io/projected/a46553cf-e4b5-4d15-b590-9c6e06819ab5-kube-api-access-j72kl\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.405730 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e66afa2-6e33-4cc2-8b31-65987d8cd10b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.496999 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.687594 4979 generic.go:334] "Generic (PLEG): container finished" podID="0e66afa2-6e33-4cc2-8b31-65987d8cd10b" containerID="e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7" exitCode=0 Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.687653 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" event={"ID":"0e66afa2-6e33-4cc2-8b31-65987d8cd10b","Type":"ContainerDied","Data":"e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7"} Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.687727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" 
event={"ID":"0e66afa2-6e33-4cc2-8b31-65987d8cd10b","Type":"ContainerDied","Data":"e48be26e4d7f5553ec943d8afad20edb82a71bdca7fcd1ab7d02ddcafffc2115"} Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.687752 4979 scope.go:117] "RemoveContainer" containerID="e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.688094 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.689893 4979 generic.go:334] "Generic (PLEG): container finished" podID="a46553cf-e4b5-4d15-b590-9c6e06819ab5" containerID="46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b" exitCode=0 Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.689949 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" event={"ID":"a46553cf-e4b5-4d15-b590-9c6e06819ab5","Type":"ContainerDied","Data":"46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b"} Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.689966 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.689989 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77df74fc74-dxgq2" event={"ID":"a46553cf-e4b5-4d15-b590-9c6e06819ab5","Type":"ContainerDied","Data":"7c14a9171f3c938df9defa43623be8174b1cfbdfa890eb7b412765a04a3f397c"} Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.708568 4979 scope.go:117] "RemoveContainer" containerID="e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.709367 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7\": container with ID starting with e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7 not found: ID does not exist" containerID="e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.709478 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7"} err="failed to get container status \"e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7\": rpc error: code = NotFound desc = could not find container \"e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7\": container with ID starting with e7adea8ae76f9650853fefee1861423a55109fed22080c49370702f49b2846b7 not found: ID does not exist" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.709567 4979 scope.go:117] "RemoveContainer" containerID="46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.726138 4979 scope.go:117] "RemoveContainer" containerID="46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.729578 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b\": container with ID starting with 46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b not found: ID does not exist" containerID="46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.729648 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b"} err="failed to get container status \"46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b\": rpc error: code = NotFound desc = could not find container \"46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b\": container with ID starting with 46bde4c700819eff717f7128888dc459e2f57290e857f3e208d3dd3d4c2b797b not found: ID does not exist" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.732143 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"] Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.743238 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c8588966-qn4cs"] Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.758690 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77df74fc74-dxgq2"] Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.763725 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77df74fc74-dxgq2"] Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827301 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"] Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827678 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="extract-content" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827699 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="extract-content" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827723 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="extract-utilities" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827733 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="extract-utilities" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827745 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827751 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827763 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="extract-content" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827769 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="extract-content" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827783 4979 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827790 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827796 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="extract-utilities" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827802 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="extract-utilities" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827811 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827817 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827823 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="extract-utilities" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827828 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="extract-utilities" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827836 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="extract-content" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827843 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="extract-content" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827850 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="extract-content" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827857 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="extract-content" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827868 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e66afa2-6e33-4cc2-8b31-65987d8cd10b" containerName="route-controller-manager" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827880 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e66afa2-6e33-4cc2-8b31-65987d8cd10b" containerName="route-controller-manager" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827892 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827899 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 21:45:37.827909 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="extract-utilities" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827916 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="extract-utilities" Jan 30 21:45:37 crc kubenswrapper[4979]: E0130 
21:45:37.827926 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46553cf-e4b5-4d15-b590-9c6e06819ab5" containerName="controller-manager" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.827934 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46553cf-e4b5-4d15-b590-9c6e06819ab5" containerName="controller-manager" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828068 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b857a3f-c3a5-4851-ba1e-25d9dbc64de5" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828077 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="82df7d39-6821-4916-b8c9-534688ca3d5e" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828089 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="444df6ed-3c43-4310-adc6-69ab0a9ea702" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828101 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e66afa2-6e33-4cc2-8b31-65987d8cd10b" containerName="route-controller-manager" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828108 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46553cf-e4b5-4d15-b590-9c6e06819ab5" containerName="controller-manager" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828120 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="568a44ae-c892-48a7-b4c0-2d83606e7b95" containerName="registry-server" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.828674 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.832398 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d689d4657-lxzrk"] Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.833086 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.833864 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.834213 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.834271 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.834438 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.834781 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.864692 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"] Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.866512 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.867701 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.867729 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.867953 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.868097 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.868275 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.874272 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.883669 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.886874 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d689d4657-lxzrk"] Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.912970 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-client-ca\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913059 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-serving-cert\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913120 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-config\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913166 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-config\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913202 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-serving-cert\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913225 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-client-ca\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913251 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vm6\" (UniqueName: \"kubernetes.io/projected/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-kube-api-access-42vm6\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913302 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9t4h\" (UniqueName: \"kubernetes.io/projected/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-kube-api-access-m9t4h\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:37 crc kubenswrapper[4979]: I0130 21:45:37.913334 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-proxy-ca-bundles\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015171 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-config\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015250 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-config\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015287 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-serving-cert\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015310 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-client-ca\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015340 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42vm6\" (UniqueName: \"kubernetes.io/projected/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-kube-api-access-42vm6\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015405 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9t4h\" (UniqueName: \"kubernetes.io/projected/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-kube-api-access-m9t4h\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015434 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-proxy-ca-bundles\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015460 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-client-ca\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.015514 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-serving-cert\") pod \"controller-manager-d689d4657-lxzrk\" (UID: 
\"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.016539 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-client-ca\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.016675 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-proxy-ca-bundles\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.016704 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-config\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.016735 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-client-ca\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.016941 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-config\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.023170 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-serving-cert\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.035440 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9t4h\" (UniqueName: \"kubernetes.io/projected/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-kube-api-access-m9t4h\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.036779 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81acdeb8-ebe9-40a3-b25c-cb98a9070c15-serving-cert\") pod \"controller-manager-d689d4657-lxzrk\" (UID: \"81acdeb8-ebe9-40a3-b25c-cb98a9070c15\") " pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.041298 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-42vm6\" (UniqueName: \"kubernetes.io/projected/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-kube-api-access-42vm6\") pod \"route-controller-manager-54668d78d7-whlnp\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.186660 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.192298 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.551753 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"] Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.634295 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d689d4657-lxzrk"] Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.700853 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" event={"ID":"81acdeb8-ebe9-40a3-b25c-cb98a9070c15","Type":"ContainerStarted","Data":"53118ab5654366884dbd5fc58ab254b9c3fc7947646ab665058039a4a290a7c7"} Jan 30 21:45:38 crc kubenswrapper[4979]: I0130 21:45:38.703995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" event={"ID":"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe","Type":"ContainerStarted","Data":"0597e34480d8b61092d333b15257aceea525575d0d6fd9cd29f2039d26375964"} Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.076980 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e66afa2-6e33-4cc2-8b31-65987d8cd10b" path="/var/lib/kubelet/pods/0e66afa2-6e33-4cc2-8b31-65987d8cd10b/volumes" Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.077936 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a46553cf-e4b5-4d15-b590-9c6e06819ab5" path="/var/lib/kubelet/pods/a46553cf-e4b5-4d15-b590-9c6e06819ab5/volumes" Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.713113 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" event={"ID":"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe","Type":"ContainerStarted","Data":"4d2ce17b90981dd81ba17f344413a819781556f323610aee0de2800fd1a74f2a"} Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.715851 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.715979 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" event={"ID":"81acdeb8-ebe9-40a3-b25c-cb98a9070c15","Type":"ContainerStarted","Data":"a8416ec064734079a5625a41bc2f84b4e946d8353cda9890c83a696cd83ff47c"} Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.716509 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.720781 4979 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.723986 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.736640 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" podStartSLOduration=3.7365810919999998 podStartE2EDuration="3.736581092s" podCreationTimestamp="2026-01-30 21:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:39.734885865 +0000 UTC m=+335.696132898" watchObservedRunningTime="2026-01-30 21:45:39.736581092 +0000 UTC m=+335.697828125" Jan 30 21:45:39 crc kubenswrapper[4979]: I0130 21:45:39.779318 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d689d4657-lxzrk" podStartSLOduration=3.779291685 podStartE2EDuration="3.779291685s" podCreationTimestamp="2026-01-30 21:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:39.777934528 +0000 UTC m=+335.739181571" watchObservedRunningTime="2026-01-30 21:45:39.779291685 +0000 UTC m=+335.740538718" Jan 30 21:45:56 crc kubenswrapper[4979]: I0130 21:45:56.584745 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"] Jan 30 21:45:56 crc kubenswrapper[4979]: I0130 21:45:56.585710 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" podUID="e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" containerName="route-controller-manager" containerID="cri-o://4d2ce17b90981dd81ba17f344413a819781556f323610aee0de2800fd1a74f2a" gracePeriod=30 Jan 30 21:45:56 crc kubenswrapper[4979]: I0130 21:45:56.829629 4979 generic.go:334] "Generic (PLEG): container finished" podID="e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" containerID="4d2ce17b90981dd81ba17f344413a819781556f323610aee0de2800fd1a74f2a" exitCode=0 Jan 30 21:45:56 crc kubenswrapper[4979]: I0130 21:45:56.829688 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" event={"ID":"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe","Type":"ContainerDied","Data":"4d2ce17b90981dd81ba17f344413a819781556f323610aee0de2800fd1a74f2a"} Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.033157 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.097793 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-config\") pod \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.097864 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42vm6\" (UniqueName: \"kubernetes.io/projected/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-kube-api-access-42vm6\") pod \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.097930 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-serving-cert\") pod \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.097982 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-client-ca\") pod \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\" (UID: \"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe\") " Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.098958 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" (UID: "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.098978 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-config" (OuterVolumeSpecName: "config") pod "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" (UID: "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.103940 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-kube-api-access-42vm6" (OuterVolumeSpecName: "kube-api-access-42vm6") pod "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" (UID: "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe"). InnerVolumeSpecName "kube-api-access-42vm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.104323 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" (UID: "e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.200010 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.200081 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42vm6\" (UniqueName: \"kubernetes.io/projected/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-kube-api-access-42vm6\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.200096 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.200106 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.837019 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" event={"ID":"e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe","Type":"ContainerDied","Data":"0597e34480d8b61092d333b15257aceea525575d0d6fd9cd29f2039d26375964"} Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.837098 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.837116 4979 scope.go:117] "RemoveContainer" containerID="4d2ce17b90981dd81ba17f344413a819781556f323610aee0de2800fd1a74f2a" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.845690 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4"] Jan 30 21:45:57 crc kubenswrapper[4979]: E0130 21:45:57.846250 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" containerName="route-controller-manager" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.846287 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" containerName="route-controller-manager" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.846725 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" containerName="route-controller-manager" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.847725 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.851622 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.851995 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.853190 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.853385 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.853555 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.853562 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.857980 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4"] Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.888000 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"] Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.892193 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-whlnp"] Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.909934 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-config\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.909976 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kvkd\" (UniqueName: \"kubernetes.io/projected/e6e8cfb9-394f-4387-9a89-95c9cc094c81-kube-api-access-6kvkd\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.910066 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-client-ca\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:57 crc kubenswrapper[4979]: I0130 21:45:57.910181 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e8cfb9-394f-4387-9a89-95c9cc094c81-serving-cert\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: 
\"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.011304 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-client-ca\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.011356 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e8cfb9-394f-4387-9a89-95c9cc094c81-serving-cert\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.011393 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-config\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.011408 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kvkd\" (UniqueName: \"kubernetes.io/projected/e6e8cfb9-394f-4387-9a89-95c9cc094c81-kube-api-access-6kvkd\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.012876 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-client-ca\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.013719 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-config\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.017112 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e8cfb9-394f-4387-9a89-95c9cc094c81-serving-cert\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.037908 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kvkd\" (UniqueName: \"kubernetes.io/projected/e6e8cfb9-394f-4387-9a89-95c9cc094c81-kube-api-access-6kvkd\") pod \"route-controller-manager-75f959899b-bxkf4\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " 
pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.174381 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.613981 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4"] Jan 30 21:45:58 crc kubenswrapper[4979]: W0130 21:45:58.617598 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6e8cfb9_394f_4387_9a89_95c9cc094c81.slice/crio-ac2999222cb0f1025d0eab9442af87845468f901b290e1764c376b75565b80ab WatchSource:0}: Error finding container ac2999222cb0f1025d0eab9442af87845468f901b290e1764c376b75565b80ab: Status 404 returned error can't find the container with id ac2999222cb0f1025d0eab9442af87845468f901b290e1764c376b75565b80ab Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.845160 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" event={"ID":"e6e8cfb9-394f-4387-9a89-95c9cc094c81","Type":"ContainerStarted","Data":"27c78bbe67cb330eea00620be9ebe5cc4f2f8952e591382ee009e6e4c86bb46e"} Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.845472 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" event={"ID":"e6e8cfb9-394f-4387-9a89-95c9cc094c81","Type":"ContainerStarted","Data":"ac2999222cb0f1025d0eab9442af87845468f901b290e1764c376b75565b80ab"} Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.845487 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.846627 4979 patch_prober.go:28] interesting pod/route-controller-manager-75f959899b-bxkf4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.846674 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" podUID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Jan 30 21:45:58 crc kubenswrapper[4979]: I0130 21:45:58.868919 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" podStartSLOduration=2.868894358 podStartE2EDuration="2.868894358s" podCreationTimestamp="2026-01-30 21:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:58.867612183 +0000 UTC m=+354.828859236" watchObservedRunningTime="2026-01-30 21:45:58.868894358 +0000 UTC m=+354.830141391" Jan 30 21:45:59 crc kubenswrapper[4979]: I0130 21:45:59.077865 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe" 
path="/var/lib/kubelet/pods/e24643ae-c1a3-4fbb-83a2-15cbb2cf48fe/volumes" Jan 30 21:45:59 crc kubenswrapper[4979]: I0130 21:45:59.856837 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:46:02 crc kubenswrapper[4979]: I0130 21:46:02.039911 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:46:02 crc kubenswrapper[4979]: I0130 21:46:02.040275 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:46:14 crc kubenswrapper[4979]: I0130 21:46:14.812850 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pq8k"] Jan 30 21:46:16 crc kubenswrapper[4979]: I0130 21:46:16.617281 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4"] Jan 30 21:46:16 crc kubenswrapper[4979]: I0130 21:46:16.618079 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" podUID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" containerName="route-controller-manager" containerID="cri-o://27c78bbe67cb330eea00620be9ebe5cc4f2f8952e591382ee009e6e4c86bb46e" gracePeriod=30 Jan 30 21:46:16 crc kubenswrapper[4979]: I0130 21:46:16.945893 4979 generic.go:334] "Generic (PLEG): container finished" podID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" containerID="27c78bbe67cb330eea00620be9ebe5cc4f2f8952e591382ee009e6e4c86bb46e" exitCode=0 Jan 30 21:46:16 crc kubenswrapper[4979]: I0130 21:46:16.945944 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" event={"ID":"e6e8cfb9-394f-4387-9a89-95c9cc094c81","Type":"ContainerDied","Data":"27c78bbe67cb330eea00620be9ebe5cc4f2f8952e591382ee009e6e4c86bb46e"} Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.055312 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.087649 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-config\") pod \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.087836 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-client-ca\") pod \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.087912 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e8cfb9-394f-4387-9a89-95c9cc094c81-serving-cert\") pod \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.087976 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kvkd\" (UniqueName: \"kubernetes.io/projected/e6e8cfb9-394f-4387-9a89-95c9cc094c81-kube-api-access-6kvkd\") pod \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\" (UID: \"e6e8cfb9-394f-4387-9a89-95c9cc094c81\") " Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.088705 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-config" (OuterVolumeSpecName: "config") pod "e6e8cfb9-394f-4387-9a89-95c9cc094c81" (UID: "e6e8cfb9-394f-4387-9a89-95c9cc094c81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.088749 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-client-ca" (OuterVolumeSpecName: "client-ca") pod "e6e8cfb9-394f-4387-9a89-95c9cc094c81" (UID: "e6e8cfb9-394f-4387-9a89-95c9cc094c81"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.095120 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e8cfb9-394f-4387-9a89-95c9cc094c81-kube-api-access-6kvkd" (OuterVolumeSpecName: "kube-api-access-6kvkd") pod "e6e8cfb9-394f-4387-9a89-95c9cc094c81" (UID: "e6e8cfb9-394f-4387-9a89-95c9cc094c81"). InnerVolumeSpecName "kube-api-access-6kvkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.097994 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e8cfb9-394f-4387-9a89-95c9cc094c81-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e6e8cfb9-394f-4387-9a89-95c9cc094c81" (UID: "e6e8cfb9-394f-4387-9a89-95c9cc094c81"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.188696 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.188726 4979 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6e8cfb9-394f-4387-9a89-95c9cc094c81-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.188736 4979 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e8cfb9-394f-4387-9a89-95c9cc094c81-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.188745 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kvkd\" (UniqueName: \"kubernetes.io/projected/e6e8cfb9-394f-4387-9a89-95c9cc094c81-kube-api-access-6kvkd\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.863821 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl"] Jan 30 21:46:17 crc kubenswrapper[4979]: E0130 21:46:17.864873 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" containerName="route-controller-manager" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.864970 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" containerName="route-controller-manager" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.865250 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" containerName="route-controller-manager" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.865854 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.888783 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl"] Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.897650 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc4mj\" (UniqueName: \"kubernetes.io/projected/9d8cb00a-591e-48f0-8da1-4157327277c5-kube-api-access-vc4mj\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.897741 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8cb00a-591e-48f0-8da1-4157327277c5-serving-cert\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.897777 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d8cb00a-591e-48f0-8da1-4157327277c5-client-ca\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.897800 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8cb00a-591e-48f0-8da1-4157327277c5-config\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.952553 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" event={"ID":"e6e8cfb9-394f-4387-9a89-95c9cc094c81","Type":"ContainerDied","Data":"ac2999222cb0f1025d0eab9442af87845468f901b290e1764c376b75565b80ab"} Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.952614 4979 scope.go:117] "RemoveContainer" containerID="27c78bbe67cb330eea00620be9ebe5cc4f2f8952e591382ee009e6e4c86bb46e" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.952665 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.981998 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4"] Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.985115 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f959899b-bxkf4"] Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.998757 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc4mj\" (UniqueName: \"kubernetes.io/projected/9d8cb00a-591e-48f0-8da1-4157327277c5-kube-api-access-vc4mj\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.998822 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8cb00a-591e-48f0-8da1-4157327277c5-serving-cert\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.998856 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d8cb00a-591e-48f0-8da1-4157327277c5-client-ca\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:17 crc kubenswrapper[4979]: I0130 21:46:17.998875 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8cb00a-591e-48f0-8da1-4157327277c5-config\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.000533 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d8cb00a-591e-48f0-8da1-4157327277c5-client-ca\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.000549 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8cb00a-591e-48f0-8da1-4157327277c5-config\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.006490 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d8cb00a-591e-48f0-8da1-4157327277c5-serving-cert\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc 
kubenswrapper[4979]: I0130 21:46:18.028513 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc4mj\" (UniqueName: \"kubernetes.io/projected/9d8cb00a-591e-48f0-8da1-4157327277c5-kube-api-access-vc4mj\") pod \"route-controller-manager-54668d78d7-tq9cl\" (UID: \"9d8cb00a-591e-48f0-8da1-4157327277c5\") " pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.195073 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.687180 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl"] Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.959435 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" event={"ID":"9d8cb00a-591e-48f0-8da1-4157327277c5","Type":"ContainerStarted","Data":"4c8cb1586ac43df894a84c19b4a0d0c262d63486143a16bfa9f843d91b65ae75"} Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.959495 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" event={"ID":"9d8cb00a-591e-48f0-8da1-4157327277c5","Type":"ContainerStarted","Data":"471896edc903d9fb464e2321c8c835f68325e7578978f0d7e9a3d4c84909a07f"} Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.959814 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:18 crc kubenswrapper[4979]: I0130 21:46:18.977481 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" podStartSLOduration=2.977458458 podStartE2EDuration="2.977458458s" podCreationTimestamp="2026-01-30 21:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:46:18.973824427 +0000 UTC m=+374.935071460" watchObservedRunningTime="2026-01-30 21:46:18.977458458 +0000 UTC m=+374.938705491" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.077998 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e8cfb9-394f-4387-9a89-95c9cc094c81" path="/var/lib/kubelet/pods/e6e8cfb9-394f-4387-9a89-95c9cc094c81/volumes" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.130309 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54668d78d7-tq9cl" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.737339 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tsxkg"] Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.738499 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.757478 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tsxkg"] Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825436 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d8b6b0a-955d-42cb-a277-3018daf971ad-trusted-ca\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825497 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt2bd\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-kube-api-access-kt2bd\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825553 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2d8b6b0a-955d-42cb-a277-3018daf971ad-registry-certificates\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825574 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2d8b6b0a-955d-42cb-a277-3018daf971ad-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825592 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2d8b6b0a-955d-42cb-a277-3018daf971ad-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825619 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825637 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-registry-tls\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.825656 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-bound-sa-token\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.847284 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927233 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d8b6b0a-955d-42cb-a277-3018daf971ad-trusted-ca\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927301 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt2bd\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-kube-api-access-kt2bd\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927347 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2d8b6b0a-955d-42cb-a277-3018daf971ad-registry-certificates\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927370 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2d8b6b0a-955d-42cb-a277-3018daf971ad-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927388 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2d8b6b0a-955d-42cb-a277-3018daf971ad-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927412 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-registry-tls\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.927431 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-bound-sa-token\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.928477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2d8b6b0a-955d-42cb-a277-3018daf971ad-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.929098 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2d8b6b0a-955d-42cb-a277-3018daf971ad-registry-certificates\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.929714 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d8b6b0a-955d-42cb-a277-3018daf971ad-trusted-ca\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.942855 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-registry-tls\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.942892 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2d8b6b0a-955d-42cb-a277-3018daf971ad-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.946353 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-bound-sa-token\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:19 crc kubenswrapper[4979]: I0130 21:46:19.947494 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt2bd\" (UniqueName: \"kubernetes.io/projected/2d8b6b0a-955d-42cb-a277-3018daf971ad-kube-api-access-kt2bd\") pod \"image-registry-66df7c8f76-tsxkg\" (UID: \"2d8b6b0a-955d-42cb-a277-3018daf971ad\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:20 crc kubenswrapper[4979]: I0130 21:46:20.107517 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:20 crc kubenswrapper[4979]: I0130 21:46:20.520946 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tsxkg"] Jan 30 21:46:20 crc kubenswrapper[4979]: W0130 21:46:20.526416 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d8b6b0a_955d_42cb_a277_3018daf971ad.slice/crio-3d8f772f22032f303039c1b3d851a135dae46274d831289d039ebe2ecf90ce4f WatchSource:0}: Error finding container 3d8f772f22032f303039c1b3d851a135dae46274d831289d039ebe2ecf90ce4f: Status 404 returned error can't find the container with id 3d8f772f22032f303039c1b3d851a135dae46274d831289d039ebe2ecf90ce4f Jan 30 21:46:20 crc kubenswrapper[4979]: I0130 21:46:20.976111 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" event={"ID":"2d8b6b0a-955d-42cb-a277-3018daf971ad","Type":"ContainerStarted","Data":"1b4c1c9f355a5b85021efd1353e93fe5a460913ea006314018aa6be4bea6033f"} Jan 30 21:46:20 crc kubenswrapper[4979]: I0130 21:46:20.976215 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" event={"ID":"2d8b6b0a-955d-42cb-a277-3018daf971ad","Type":"ContainerStarted","Data":"3d8f772f22032f303039c1b3d851a135dae46274d831289d039ebe2ecf90ce4f"} Jan 30 21:46:21 crc kubenswrapper[4979]: I0130 21:46:21.984295 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:32 crc kubenswrapper[4979]: I0130 21:46:32.040379 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:46:32 crc kubenswrapper[4979]: I0130 21:46:32.041118 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:46:39 crc kubenswrapper[4979]: I0130 21:46:39.853735 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" podUID="de06742d-2533-4510-abec-ff0f35d84a45" containerName="oauth-openshift" containerID="cri-o://81e7ddaae02978ad5a7b5198e13bc3adaa3cfa27db9552a23b00db36df2ba57d" gracePeriod=15 Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.107145 4979 generic.go:334] "Generic (PLEG): container finished" podID="de06742d-2533-4510-abec-ff0f35d84a45" containerID="81e7ddaae02978ad5a7b5198e13bc3adaa3cfa27db9552a23b00db36df2ba57d" exitCode=0 Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.107276 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" event={"ID":"de06742d-2533-4510-abec-ff0f35d84a45","Type":"ContainerDied","Data":"81e7ddaae02978ad5a7b5198e13bc3adaa3cfa27db9552a23b00db36df2ba57d"} Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.113142 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.134100 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tsxkg" podStartSLOduration=21.134077713 podStartE2EDuration="21.134077713s" podCreationTimestamp="2026-01-30 21:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:46:20.994124512 +0000 UTC m=+376.955371565" watchObservedRunningTime="2026-01-30 21:46:40.134077713 +0000 UTC m=+396.095324746" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.169204 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rvdlc"] Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.311596 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.354676 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5586db8965-x5tfp"] Jan 30 21:46:40 crc kubenswrapper[4979]: E0130 21:46:40.355009 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de06742d-2533-4510-abec-ff0f35d84a45" containerName="oauth-openshift" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.355070 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="de06742d-2533-4510-abec-ff0f35d84a45" containerName="oauth-openshift" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.355174 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="de06742d-2533-4510-abec-ff0f35d84a45" containerName="oauth-openshift" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.355716 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.358726 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5586db8965-x5tfp"] Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.462468 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-serving-cert\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.462530 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-idp-0-file-data\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.462643 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-ocp-branding-template\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463409 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-audit-policies\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463746 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-router-certs\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463765 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-session\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463790 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-error\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463827 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-service-ca\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463892 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/de06742d-2533-4510-abec-ff0f35d84a45-audit-dir\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.463751 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.464409 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.464488 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-cliconfig\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.464539 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-trusted-ca-bundle\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.464569 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de06742d-2533-4510-abec-ff0f35d84a45-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465178 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465186 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465219 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-login\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465294 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-provider-selection\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465665 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sztff\" (UniqueName: \"kubernetes.io/projected/de06742d-2533-4510-abec-ff0f35d84a45-kube-api-access-sztff\") pod \"de06742d-2533-4510-abec-ff0f35d84a45\" (UID: \"de06742d-2533-4510-abec-ff0f35d84a45\") " Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-login\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465932 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/660a3a75-e96c-432c-80b8-aea9a9382317-audit-dir\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.465972 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466054 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466084 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-session\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 
21:46:40.466111 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwp6p\" (UniqueName: \"kubernetes.io/projected/660a3a75-e96c-432c-80b8-aea9a9382317-kube-api-access-fwp6p\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466194 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466249 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-error\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466273 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-audit-policies\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466303 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466326 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-service-ca\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466349 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466453 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: 
\"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466487 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-router-certs\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466552 4979 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466569 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466585 4979 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de06742d-2533-4510-abec-ff0f35d84a45-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466598 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.466611 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.469445 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de06742d-2533-4510-abec-ff0f35d84a45-kube-api-access-sztff" (OuterVolumeSpecName: "kube-api-access-sztff") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "kube-api-access-sztff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.469850 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.470189 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.471154 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.471321 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.471496 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.471980 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.472200 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.472637 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "de06742d-2533-4510-abec-ff0f35d84a45" (UID: "de06742d-2533-4510-abec-ff0f35d84a45"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.567889 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/660a3a75-e96c-432c-80b8-aea9a9382317-audit-dir\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.567992 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568045 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568077 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-session\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568106 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwp6p\" (UniqueName: \"kubernetes.io/projected/660a3a75-e96c-432c-80b8-aea9a9382317-kube-api-access-fwp6p\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568110 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/660a3a75-e96c-432c-80b8-aea9a9382317-audit-dir\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568142 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568324 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-audit-policies\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568364 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-error\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568425 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568457 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-service-ca\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568490 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568622 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568662 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-router-certs\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568711 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-login\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568839 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568860 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568886 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568905 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568924 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568944 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568963 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.568986 4979 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de06742d-2533-4510-abec-ff0f35d84a45-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.569009 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sztff\" (UniqueName: \"kubernetes.io/projected/de06742d-2533-4510-abec-ff0f35d84a45-kube-api-access-sztff\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.569173 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-audit-policies\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.569523 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.569559 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc 
kubenswrapper[4979]: I0130 21:46:40.570104 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-service-ca\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.571879 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-session\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.572542 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-router-certs\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.573731 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.574459 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.575650 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.575974 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.582168 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-login\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 
21:46:40.584903 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/660a3a75-e96c-432c-80b8-aea9a9382317-v4-0-config-user-template-error\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.602988 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwp6p\" (UniqueName: \"kubernetes.io/projected/660a3a75-e96c-432c-80b8-aea9a9382317-kube-api-access-fwp6p\") pod \"oauth-openshift-5586db8965-x5tfp\" (UID: \"660a3a75-e96c-432c-80b8-aea9a9382317\") " pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:40 crc kubenswrapper[4979]: I0130 21:46:40.670955 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.095081 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5586db8965-x5tfp"] Jan 30 21:46:41 crc kubenswrapper[4979]: W0130 21:46:41.109310 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod660a3a75_e96c_432c_80b8_aea9a9382317.slice/crio-2c188556c11d49caaf8ecc02cde95f918d4ad22807fed37ad3164d1dc9bc23b1 WatchSource:0}: Error finding container 2c188556c11d49caaf8ecc02cde95f918d4ad22807fed37ad3164d1dc9bc23b1: Status 404 returned error can't find the container with id 2c188556c11d49caaf8ecc02cde95f918d4ad22807fed37ad3164d1dc9bc23b1 Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.116519 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" event={"ID":"de06742d-2533-4510-abec-ff0f35d84a45","Type":"ContainerDied","Data":"246d40c550fcc6c9fdc34ebbfdb6355e89a001f7901886dab00180fbdbb32fa5"} Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.116604 4979 scope.go:117] "RemoveContainer" containerID="81e7ddaae02978ad5a7b5198e13bc3adaa3cfa27db9552a23b00db36df2ba57d" Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.116666 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pq8k" Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.119653 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" event={"ID":"660a3a75-e96c-432c-80b8-aea9a9382317","Type":"ContainerStarted","Data":"2c188556c11d49caaf8ecc02cde95f918d4ad22807fed37ad3164d1dc9bc23b1"} Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.172978 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pq8k"] Jan 30 21:46:41 crc kubenswrapper[4979]: I0130 21:46:41.178585 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pq8k"] Jan 30 21:46:42 crc kubenswrapper[4979]: I0130 21:46:42.131806 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" event={"ID":"660a3a75-e96c-432c-80b8-aea9a9382317","Type":"ContainerStarted","Data":"77d29596ccc62748fe997b8be8cfbb321d441df710b36231756b3dfa47b1500c"} Jan 30 21:46:42 crc kubenswrapper[4979]: I0130 21:46:42.132491 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:42 crc kubenswrapper[4979]: I0130 21:46:42.140347 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" Jan 30 21:46:42 crc kubenswrapper[4979]: I0130 21:46:42.163108 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5586db8965-x5tfp" podStartSLOduration=28.163080305 podStartE2EDuration="28.163080305s" podCreationTimestamp="2026-01-30 21:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:46:42.159292574 +0000 UTC m=+398.120539637" watchObservedRunningTime="2026-01-30 21:46:42.163080305 +0000 UTC m=+398.124327368" Jan 30 21:46:43 crc kubenswrapper[4979]: I0130 21:46:43.079549 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de06742d-2533-4510-abec-ff0f35d84a45" path="/var/lib/kubelet/pods/de06742d-2533-4510-abec-ff0f35d84a45/volumes" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.185568 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk444"] Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.186507 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dk444" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="registry-server" containerID="cri-o://951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631" gracePeriod=30 Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.191656 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-krrkl"] Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.191925 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-krrkl" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="registry-server" containerID="cri-o://165fe5bf1fc47247f3d6114846a10d0f59102aaf37fc99f103ab83026418760f" gracePeriod=30 Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.202925 4979 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lzp5"] Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.203168 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator" containerID="cri-o://585161ecfcfec9bab6e3f6343cc5b39fbcc29e68b0b21ee9c50d8350eb065d80" gracePeriod=30 Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.215471 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjwlb"] Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.215749 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wjwlb" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="registry-server" containerID="cri-o://66b10ec48352a0a5598a324fadbde93f516e9ce5018944e53e2f4c6a14a933a7" gracePeriod=30 Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.222752 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nzltj"] Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.225360 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.232306 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2tvd8"] Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.232662 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2tvd8" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="registry-server" containerID="cri-o://e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777" gracePeriod=30 Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.237232 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nzltj"] Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.304122 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ea935cc6-1adc-4763-bf1c-8c08fec3894f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.304186 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtq5q\" (UniqueName: \"kubernetes.io/projected/ea935cc6-1adc-4763-bf1c-8c08fec3894f-kube-api-access-gtq5q\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.304245 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea935cc6-1adc-4763-bf1c-8c08fec3894f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.406198 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea935cc6-1adc-4763-bf1c-8c08fec3894f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.406379 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ea935cc6-1adc-4763-bf1c-8c08fec3894f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.406408 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtq5q\" (UniqueName: \"kubernetes.io/projected/ea935cc6-1adc-4763-bf1c-8c08fec3894f-kube-api-access-gtq5q\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.407659 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea935cc6-1adc-4763-bf1c-8c08fec3894f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.415254 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ea935cc6-1adc-4763-bf1c-8c08fec3894f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.425716 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtq5q\" (UniqueName: \"kubernetes.io/projected/ea935cc6-1adc-4763-bf1c-8c08fec3894f-kube-api-access-gtq5q\") pod \"marketplace-operator-79b997595-nzltj\" (UID: \"ea935cc6-1adc-4763-bf1c-8c08fec3894f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.545874 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.700256 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.813439 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-utilities\") pod \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.813596 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmtm\" (UniqueName: \"kubernetes.io/projected/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-kube-api-access-nmmtm\") pod \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.813650 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-catalog-content\") pod \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\" (UID: \"3641ad73-644b-4d71-860b-4d8b7e6a3a6d\") " Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.814465 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-utilities" (OuterVolumeSpecName: "utilities") pod "3641ad73-644b-4d71-860b-4d8b7e6a3a6d" (UID: "3641ad73-644b-4d71-860b-4d8b7e6a3a6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.817271 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-kube-api-access-nmmtm" (OuterVolumeSpecName: "kube-api-access-nmmtm") pod "3641ad73-644b-4d71-860b-4d8b7e6a3a6d" (UID: "3641ad73-644b-4d71-860b-4d8b7e6a3a6d"). InnerVolumeSpecName "kube-api-access-nmmtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.915110 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.915145 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmmtm\" (UniqueName: \"kubernetes.io/projected/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-kube-api-access-nmmtm\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.944758 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3641ad73-644b-4d71-860b-4d8b7e6a3a6d" (UID: "3641ad73-644b-4d71-860b-4d8b7e6a3a6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:46:58 crc kubenswrapper[4979]: W0130 21:46:58.948889 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea935cc6_1adc_4763_bf1c_8c08fec3894f.slice/crio-f00b61820427bdbaa5b18b86e97e055cd5d8cac8a1b2d23c45240b3f6bd0a033 WatchSource:0}: Error finding container f00b61820427bdbaa5b18b86e97e055cd5d8cac8a1b2d23c45240b3f6bd0a033: Status 404 returned error can't find the container with id f00b61820427bdbaa5b18b86e97e055cd5d8cac8a1b2d23c45240b3f6bd0a033 Jan 30 21:46:58 crc kubenswrapper[4979]: I0130 21:46:58.950404 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nzltj"] Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.016879 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3641ad73-644b-4d71-860b-4d8b7e6a3a6d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.237682 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.238037 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" event={"ID":"15489ac0-9ae3-4068-973c-fd1ea98642c3","Type":"ContainerDied","Data":"585161ecfcfec9bab6e3f6343cc5b39fbcc29e68b0b21ee9c50d8350eb065d80"} Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.238003 4979 generic.go:334] "Generic (PLEG): container finished" podID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerID="585161ecfcfec9bab6e3f6343cc5b39fbcc29e68b0b21ee9c50d8350eb065d80" exitCode=0 Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.243024 4979 generic.go:334] "Generic (PLEG): container finished" podID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerID="e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777" exitCode=0 Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.243172 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerDied","Data":"e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777"} Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.243176 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2tvd8" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.243208 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvd8" event={"ID":"3641ad73-644b-4d71-860b-4d8b7e6a3a6d","Type":"ContainerDied","Data":"2225585b885540daf5c8798c55ba2f9f3246f245430840cea94336a10b265b9b"} Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.243231 4979 scope.go:117] "RemoveContainer" containerID="e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.258573 4979 generic.go:334] "Generic (PLEG): container finished" podID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerID="66b10ec48352a0a5598a324fadbde93f516e9ce5018944e53e2f4c6a14a933a7" exitCode=0 Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.258664 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjwlb" event={"ID":"cfb214a7-6df6-4fd6-a74c-db4f38b0a086","Type":"ContainerDied","Data":"66b10ec48352a0a5598a324fadbde93f516e9ce5018944e53e2f4c6a14a933a7"} Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.262811 4979 generic.go:334] "Generic (PLEG): container finished" podID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerID="951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631" exitCode=0 Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.262889 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk444" event={"ID":"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8","Type":"ContainerDied","Data":"951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631"} Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.262919 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk444" event={"ID":"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8","Type":"ContainerDied","Data":"295443fe09756d263200da5b0351f58fb651db4b6823dfb3399c5cfb72b8ea20"} Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.262998 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dk444" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.265691 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" event={"ID":"ea935cc6-1adc-4763-bf1c-8c08fec3894f","Type":"ContainerStarted","Data":"f00b61820427bdbaa5b18b86e97e055cd5d8cac8a1b2d23c45240b3f6bd0a033"} Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.280132 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2tvd8"] Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.282758 4979 generic.go:334] "Generic (PLEG): container finished" podID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerID="165fe5bf1fc47247f3d6114846a10d0f59102aaf37fc99f103ab83026418760f" exitCode=0 Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.282780 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krrkl" event={"ID":"9ced41eb-6843-4dfe-81c7-267a56f75a73","Type":"ContainerDied","Data":"165fe5bf1fc47247f3d6114846a10d0f59102aaf37fc99f103ab83026418760f"} Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.284634 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2tvd8"] Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.291325 4979 scope.go:117] "RemoveContainer" containerID="c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.322198 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-utilities\") pod \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.322273 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content\") pod \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.322360 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rtgh\" (UniqueName: \"kubernetes.io/projected/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-kube-api-access-2rtgh\") pod \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\" (UID: \"6ceea51c-f0b8-4de3-be53-f1d857b3a1b8\") " Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.323053 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-utilities" (OuterVolumeSpecName: "utilities") pod "6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" (UID: "6ceea51c-f0b8-4de3-be53-f1d857b3a1b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.329860 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-kube-api-access-2rtgh" (OuterVolumeSpecName: "kube-api-access-2rtgh") pod "6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" (UID: "6ceea51c-f0b8-4de3-be53-f1d857b3a1b8"). InnerVolumeSpecName "kube-api-access-2rtgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.383977 4979 scope.go:117] "RemoveContainer" containerID="f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.403474 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" (UID: "6ceea51c-f0b8-4de3-be53-f1d857b3a1b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.413945 4979 scope.go:117] "RemoveContainer" containerID="e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777" Jan 30 21:46:59 crc kubenswrapper[4979]: E0130 21:46:59.414506 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777\": container with ID starting with e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777 not found: ID does not exist" containerID="e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.414548 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777"} err="failed to get container status \"e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777\": rpc error: code = NotFound desc = could not find container \"e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777\": container with ID starting with e73000c52980d054cd7689a9e42eec1b21af705b1bd31b3e4700e64dcd096777 not found: ID does not exist" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.414570 4979 scope.go:117] "RemoveContainer" containerID="c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126" Jan 30 21:46:59 crc kubenswrapper[4979]: E0130 21:46:59.415384 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126\": container with ID starting with c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126 not found: ID does not exist" containerID="c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.415413 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126"} err="failed to get container status \"c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126\": rpc error: code = NotFound desc = could not find container \"c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126\": container with ID starting with c75c8fd6372b5e5fba9e54d360717eeb7d452b6560d36fce44d9007287e51126 not found: ID does not exist" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.415426 4979 scope.go:117] "RemoveContainer" containerID="f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c" Jan 30 21:46:59 crc kubenswrapper[4979]: E0130 21:46:59.415801 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c\": container with ID starting with f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c not found: ID does not exist" containerID="f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.415821 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c"} err="failed to get container status \"f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c\": rpc error: code = NotFound desc = could not find container \"f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c\": container with ID starting with f83307be0b522bc1c873ea7cb56fecc3a4eaac21f1823e782916699de95f2a4c not found: ID does not exist" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.415833 4979 scope.go:117] "RemoveContainer" containerID="951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.423924 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rtgh\" (UniqueName: \"kubernetes.io/projected/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-kube-api-access-2rtgh\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.423956 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.423965 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.451183 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.456992 4979 scope.go:117] "RemoveContainer" containerID="4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.461230 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.479016 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.488414 4979 scope.go:117] "RemoveContainer" containerID="d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.512714 4979 scope.go:117] "RemoveContainer" containerID="951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631" Jan 30 21:46:59 crc kubenswrapper[4979]: E0130 21:46:59.513838 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631\": container with ID starting with 951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631 not found: ID does not exist" containerID="951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.513888 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631"} err="failed to get container status \"951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631\": rpc error: code = NotFound desc = could not find container \"951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631\": container with ID starting with 951d9b7b450cdb9a78ddd0a2044f59412ae5e7ca643761b0553be72dec3f8631 not found: ID does not exist" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.513918 4979 scope.go:117] "RemoveContainer" containerID="4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6" Jan 30 21:46:59 crc kubenswrapper[4979]: E0130 21:46:59.514255 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6\": container with ID starting with 4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6 not found: ID does not exist" containerID="4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.514280 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6"} err="failed to get container status \"4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6\": rpc error: code = NotFound desc = could not find container \"4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6\": container with ID starting with 4ca8216ba340db5b940f6b2dfbc066faa1b0574b211f71a750c0f39bc2d2dbc6 not found: ID does not exist" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.514293 4979 scope.go:117] "RemoveContainer" containerID="d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1" Jan 30 21:46:59 crc kubenswrapper[4979]: E0130 21:46:59.514507 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1\": container with ID starting with d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1 not found: ID does not exist" containerID="d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.514534 4979 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1"} err="failed to get container status \"d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1\": rpc error: code = NotFound desc = could not find container \"d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1\": container with ID starting with d300a47fbe21d60a591eef43bf7ca12d7e4ec2e73a9c2f121fb840eb9e60f8c1 not found: ID does not exist" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525257 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zltvn\" (UniqueName: \"kubernetes.io/projected/15489ac0-9ae3-4068-973c-fd1ea98642c3-kube-api-access-zltvn\") pod \"15489ac0-9ae3-4068-973c-fd1ea98642c3\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525379 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nls66\" (UniqueName: \"kubernetes.io/projected/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-kube-api-access-nls66\") pod \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525411 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-catalog-content\") pod \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525471 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snrx6\" (UniqueName: \"kubernetes.io/projected/9ced41eb-6843-4dfe-81c7-267a56f75a73-kube-api-access-snrx6\") pod \"9ced41eb-6843-4dfe-81c7-267a56f75a73\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525569 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-catalog-content\") pod \"9ced41eb-6843-4dfe-81c7-267a56f75a73\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525591 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-trusted-ca\") pod \"15489ac0-9ae3-4068-973c-fd1ea98642c3\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525635 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-utilities\") pod \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\" (UID: \"cfb214a7-6df6-4fd6-a74c-db4f38b0a086\") " Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525662 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-operator-metrics\") pod \"15489ac0-9ae3-4068-973c-fd1ea98642c3\" (UID: \"15489ac0-9ae3-4068-973c-fd1ea98642c3\") " Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.525700 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-utilities\") pod \"9ced41eb-6843-4dfe-81c7-267a56f75a73\" (UID: \"9ced41eb-6843-4dfe-81c7-267a56f75a73\") " Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.526265 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "15489ac0-9ae3-4068-973c-fd1ea98642c3" (UID: "15489ac0-9ae3-4068-973c-fd1ea98642c3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.526548 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-utilities" (OuterVolumeSpecName: "utilities") pod "cfb214a7-6df6-4fd6-a74c-db4f38b0a086" (UID: "cfb214a7-6df6-4fd6-a74c-db4f38b0a086"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.526641 4979 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.527287 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-utilities" (OuterVolumeSpecName: "utilities") pod "9ced41eb-6843-4dfe-81c7-267a56f75a73" (UID: "9ced41eb-6843-4dfe-81c7-267a56f75a73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.528591 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-kube-api-access-nls66" (OuterVolumeSpecName: "kube-api-access-nls66") pod "cfb214a7-6df6-4fd6-a74c-db4f38b0a086" (UID: "cfb214a7-6df6-4fd6-a74c-db4f38b0a086"). InnerVolumeSpecName "kube-api-access-nls66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.528854 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15489ac0-9ae3-4068-973c-fd1ea98642c3-kube-api-access-zltvn" (OuterVolumeSpecName: "kube-api-access-zltvn") pod "15489ac0-9ae3-4068-973c-fd1ea98642c3" (UID: "15489ac0-9ae3-4068-973c-fd1ea98642c3"). InnerVolumeSpecName "kube-api-access-zltvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.529007 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ced41eb-6843-4dfe-81c7-267a56f75a73-kube-api-access-snrx6" (OuterVolumeSpecName: "kube-api-access-snrx6") pod "9ced41eb-6843-4dfe-81c7-267a56f75a73" (UID: "9ced41eb-6843-4dfe-81c7-267a56f75a73"). InnerVolumeSpecName "kube-api-access-snrx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.529187 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "15489ac0-9ae3-4068-973c-fd1ea98642c3" (UID: "15489ac0-9ae3-4068-973c-fd1ea98642c3"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.546623 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfb214a7-6df6-4fd6-a74c-db4f38b0a086" (UID: "cfb214a7-6df6-4fd6-a74c-db4f38b0a086"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.592198 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk444"] Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.595292 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dk444"] Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.614017 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ced41eb-6843-4dfe-81c7-267a56f75a73" (UID: "9ced41eb-6843-4dfe-81c7-267a56f75a73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628589 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628636 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628650 4979 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/15489ac0-9ae3-4068-973c-fd1ea98642c3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628666 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ced41eb-6843-4dfe-81c7-267a56f75a73-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628681 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zltvn\" (UniqueName: \"kubernetes.io/projected/15489ac0-9ae3-4068-973c-fd1ea98642c3-kube-api-access-zltvn\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628693 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nls66\" (UniqueName: \"kubernetes.io/projected/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-kube-api-access-nls66\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628704 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfb214a7-6df6-4fd6-a74c-db4f38b0a086-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:46:59 crc kubenswrapper[4979]: I0130 21:46:59.628716 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snrx6\" (UniqueName: \"kubernetes.io/projected/9ced41eb-6843-4dfe-81c7-267a56f75a73-kube-api-access-snrx6\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:00 
crc kubenswrapper[4979]: I0130 21:47:00.291233 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" event={"ID":"ea935cc6-1adc-4763-bf1c-8c08fec3894f","Type":"ContainerStarted","Data":"a98a4834f147d0c9448522daffe2683971e336e5c5349c1eb38bc8863c0ae3ef"} Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.292604 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.296316 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.297974 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krrkl" event={"ID":"9ced41eb-6843-4dfe-81c7-267a56f75a73","Type":"ContainerDied","Data":"ef80ed7d6ea466150a57b7d4595c84c46d03f43e54dcb40334059a4c99c74be3"} Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.298072 4979 scope.go:117] "RemoveContainer" containerID="165fe5bf1fc47247f3d6114846a10d0f59102aaf37fc99f103ab83026418760f" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.298264 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krrkl" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.313612 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nzltj" podStartSLOduration=2.313590583 podStartE2EDuration="2.313590583s" podCreationTimestamp="2026-01-30 21:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:47:00.310759386 +0000 UTC m=+416.272006419" watchObservedRunningTime="2026-01-30 21:47:00.313590583 +0000 UTC m=+416.274837606" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.318153 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" event={"ID":"15489ac0-9ae3-4068-973c-fd1ea98642c3","Type":"ContainerDied","Data":"77916c27a3bed0009808e06c73482e7ba563d922fb5c460a56269b992ef94952"} Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.318277 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4lzp5" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.325926 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjwlb" event={"ID":"cfb214a7-6df6-4fd6-a74c-db4f38b0a086","Type":"ContainerDied","Data":"69b34253c166acfc981a0414523d053e63aae7c6e06110f5fe68cf8028008964"} Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.326017 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjwlb" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.340948 4979 scope.go:117] "RemoveContainer" containerID="9c8374b15b5619f4f1304cf75cea07e98769e40d36978831645aa6ad442f9748" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.370746 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-krrkl"] Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.375177 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-krrkl"] Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.399026 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lzp5"] Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.405282 4979 scope.go:117] "RemoveContainer" containerID="ac193c08f8b37b1caaa0e8f2fd6642d2080bfcadd0f1988fbb608a5fad551f06" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.406222 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lzp5"] Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.420879 4979 scope.go:117] "RemoveContainer" containerID="585161ecfcfec9bab6e3f6343cc5b39fbcc29e68b0b21ee9c50d8350eb065d80" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.425581 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjwlb"] Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.429510 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjwlb"] Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.438680 4979 scope.go:117] "RemoveContainer" containerID="66b10ec48352a0a5598a324fadbde93f516e9ce5018944e53e2f4c6a14a933a7" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.453494 4979 scope.go:117] "RemoveContainer" containerID="6777c7a712aaeb3b92c712ea13c14e93a0636f80d815df1f08df98f2e3cc68fe" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.468803 4979 scope.go:117] "RemoveContainer" containerID="79a85f996439ff844121a3f1030805086e2c3395fd9f9a97d7660f7b7319ecdd" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.590866 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mvj6v"] Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591122 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="registry-server" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591136 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="registry-server" Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591149 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="registry-server" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591155 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="registry-server" Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591169 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="extract-utilities" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591175 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" 
containerName="extract-utilities" Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591185 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="registry-server" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591190 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="registry-server" Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591199 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="extract-content" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591204 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="extract-content" Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591211 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="extract-utilities" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591217 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="extract-utilities" Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591225 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="registry-server" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591233 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="registry-server" Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591244 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="extract-utilities" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591250 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="extract-utilities" Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591258 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="extract-content" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591265 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="extract-content" Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591273 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591279 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator" Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591288 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="extract-content" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591293 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="extract-content" Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591300 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="extract-utilities" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591305 4979 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="extract-utilities" Jan 30 21:47:00 crc kubenswrapper[4979]: E0130 21:47:00.591313 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="extract-content" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591319 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="extract-content" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591423 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" containerName="registry-server" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591434 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" containerName="marketplace-operator" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591440 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" containerName="registry-server" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591450 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" containerName="registry-server" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.591461 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" containerName="registry-server" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.592278 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.596679 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.599820 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvj6v"] Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.641762 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-catalog-content\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.641828 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-utilities\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.641858 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mp8q\" (UniqueName: \"kubernetes.io/projected/135dc03e-075f-41a4-934c-8d914d497f69-kube-api-access-2mp8q\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.742834 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mp8q\" (UniqueName: 
\"kubernetes.io/projected/135dc03e-075f-41a4-934c-8d914d497f69-kube-api-access-2mp8q\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.742950 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-catalog-content\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.743006 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-utilities\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.743612 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-utilities\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.743805 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-catalog-content\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.761358 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mp8q\" (UniqueName: \"kubernetes.io/projected/135dc03e-075f-41a4-934c-8d914d497f69-kube-api-access-2mp8q\") pod \"certified-operators-mvj6v\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:00 crc kubenswrapper[4979]: I0130 21:47:00.907683 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.095448 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15489ac0-9ae3-4068-973c-fd1ea98642c3" path="/var/lib/kubelet/pods/15489ac0-9ae3-4068-973c-fd1ea98642c3/volumes" Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.096540 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3641ad73-644b-4d71-860b-4d8b7e6a3a6d" path="/var/lib/kubelet/pods/3641ad73-644b-4d71-860b-4d8b7e6a3a6d/volumes" Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.097219 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ceea51c-f0b8-4de3-be53-f1d857b3a1b8" path="/var/lib/kubelet/pods/6ceea51c-f0b8-4de3-be53-f1d857b3a1b8/volumes" Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.098359 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ced41eb-6843-4dfe-81c7-267a56f75a73" path="/var/lib/kubelet/pods/9ced41eb-6843-4dfe-81c7-267a56f75a73/volumes" Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.099018 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfb214a7-6df6-4fd6-a74c-db4f38b0a086" path="/var/lib/kubelet/pods/cfb214a7-6df6-4fd6-a74c-db4f38b0a086/volumes" Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.301350 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvj6v"] Jan 30 21:47:01 crc kubenswrapper[4979]: I0130 21:47:01.335020 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerStarted","Data":"839a0e21c6342d6c49c0683bac9adda801e1ebfd8079dc25226f6fa62891ca90"} Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.039479 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.039539 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.039591 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.040299 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d5308deb4fb750f100d625c67d41f0e4ff6f56c501723aebe861edc5dea525b"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.040383 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" 
containerID="cri-o://1d5308deb4fb750f100d625c67d41f0e4ff6f56c501723aebe861edc5dea525b" gracePeriod=600 Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.344907 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="1d5308deb4fb750f100d625c67d41f0e4ff6f56c501723aebe861edc5dea525b" exitCode=0 Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.344991 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"1d5308deb4fb750f100d625c67d41f0e4ff6f56c501723aebe861edc5dea525b"} Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.345284 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"de200c01a74c734df60d272ffbf006cff1b226d077b5d5ae12ed63d78d99ee43"} Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.345308 4979 scope.go:117] "RemoveContainer" containerID="92bb7cca6da53077db78f23df2635498723ede984481aeb42776383900f22d1d" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.359723 4979 generic.go:334] "Generic (PLEG): container finished" podID="135dc03e-075f-41a4-934c-8d914d497f69" containerID="2775cfa6f3efbca70770c0157c242e36a5de365efbaf9c6628031b3077d49317" exitCode=0 Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.359849 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerDied","Data":"2775cfa6f3efbca70770c0157c242e36a5de365efbaf9c6628031b3077d49317"} Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.395025 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k8s6x"] Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.402544 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.405567 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.413057 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8s6x"] Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.466832 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-utilities\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.467259 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-catalog-content\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.467377 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2np9\" (UniqueName: \"kubernetes.io/projected/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-kube-api-access-t2np9\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.568475 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-catalog-content\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.568942 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2np9\" (UniqueName: \"kubernetes.io/projected/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-kube-api-access-t2np9\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.569008 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-utilities\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.569487 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-catalog-content\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.569545 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-utilities\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " 
pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.592874 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2np9\" (UniqueName: \"kubernetes.io/projected/ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d-kube-api-access-t2np9\") pod \"redhat-operators-k8s6x\" (UID: \"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d\") " pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.720657 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8s6x" Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.996814 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wfnsx"] Jan 30 21:47:02 crc kubenswrapper[4979]: I0130 21:47:02.998292 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.001142 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.002026 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfnsx"] Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.079157 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-utilities\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.079246 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb4md\" (UniqueName: \"kubernetes.io/projected/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-kube-api-access-xb4md\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.079286 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-catalog-content\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.115923 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8s6x"] Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.180848 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-catalog-content\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.180900 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-utilities\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " 
pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.180981 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb4md\" (UniqueName: \"kubernetes.io/projected/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-kube-api-access-xb4md\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.181467 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-catalog-content\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.181522 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-utilities\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.202192 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb4md\" (UniqueName: \"kubernetes.io/projected/eb5ba6de-4ef3-49a4-bd09-1ca00d210025-kube-api-access-xb4md\") pod \"community-operators-wfnsx\" (UID: \"eb5ba6de-4ef3-49a4-bd09-1ca00d210025\") " pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.315506 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfnsx" Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.365780 4979 generic.go:334] "Generic (PLEG): container finished" podID="ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d" containerID="7b99e97c5de516482b73f4c44ef1aba9c5e09ade1d5185a17072c1d139a4b9a5" exitCode=0 Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.365859 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8s6x" event={"ID":"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d","Type":"ContainerDied","Data":"7b99e97c5de516482b73f4c44ef1aba9c5e09ade1d5185a17072c1d139a4b9a5"} Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.365884 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8s6x" event={"ID":"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d","Type":"ContainerStarted","Data":"708050d220016c9693a3f7d1f85a1117831ba0120073a81316a37b568c72fe7f"} Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.368556 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerStarted","Data":"d404bfe67ff421181512f1fd0ec9b497604ce89b019eae22246b17cef4cbd11a"} Jan 30 21:47:03 crc kubenswrapper[4979]: W0130 21:47:03.763360 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb5ba6de_4ef3_49a4_bd09_1ca00d210025.slice/crio-8c7ecd5004045341b8967ecea5f9fda566d9d49ca5cf27fe06c5b27e91c5a8c8 WatchSource:0}: Error finding container 8c7ecd5004045341b8967ecea5f9fda566d9d49ca5cf27fe06c5b27e91c5a8c8: Status 404 returned error can't find the container with id 
8c7ecd5004045341b8967ecea5f9fda566d9d49ca5cf27fe06c5b27e91c5a8c8 Jan 30 21:47:03 crc kubenswrapper[4979]: I0130 21:47:03.765624 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfnsx"] Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.381772 4979 generic.go:334] "Generic (PLEG): container finished" podID="eb5ba6de-4ef3-49a4-bd09-1ca00d210025" containerID="f37e167a660c612d1348cb7e55c35fbb6038e87ee45d4ece748a0cf2d0fa1d4a" exitCode=0 Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.381845 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfnsx" event={"ID":"eb5ba6de-4ef3-49a4-bd09-1ca00d210025","Type":"ContainerDied","Data":"f37e167a660c612d1348cb7e55c35fbb6038e87ee45d4ece748a0cf2d0fa1d4a"} Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.381897 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfnsx" event={"ID":"eb5ba6de-4ef3-49a4-bd09-1ca00d210025","Type":"ContainerStarted","Data":"8c7ecd5004045341b8967ecea5f9fda566d9d49ca5cf27fe06c5b27e91c5a8c8"} Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.383842 4979 generic.go:334] "Generic (PLEG): container finished" podID="135dc03e-075f-41a4-934c-8d914d497f69" containerID="d404bfe67ff421181512f1fd0ec9b497604ce89b019eae22246b17cef4cbd11a" exitCode=0 Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.383903 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerDied","Data":"d404bfe67ff421181512f1fd0ec9b497604ce89b019eae22246b17cef4cbd11a"} Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.787834 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7jr2p"] Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.789188 4979 util.go:30] "No sandbox for pod can be found. 
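[Editor's note] Each marketplace pod above repeats the same pattern: a container finishes with exitCode=0, then the next one starts, and only afterwards does registry-server come up. That is ordered init-container execution. A minimal sketch of the shape of such a pod spec in Go with k8s.io/api types; the container names come from the log, but the image is a placeholder, since the real catalog image never appears here:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Init containers run one at a time, in declaration order, and each
        // must exit 0 (the "ContainerDied ... exitCode=0" events above)
        // before the next starts; only then does the long-running
        // registry-server container begin.
        spec := corev1.PodSpec{
            InitContainers: []corev1.Container{
                {Name: "extract-utilities", Image: "catalog-image:placeholder"}, // hypothetical image
                {Name: "extract-content", Image: "catalog-image:placeholder"},   // hypothetical image
            },
            Containers: []corev1.Container{
                {Name: "registry-server", Image: "catalog-image:placeholder"}, // hypothetical image
            },
        }
        fmt.Println("init containers:", len(spec.InitContainers), "- main containers:", len(spec.Containers))
    }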
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.791945 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.805155 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jr2p"] Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.809762 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-utilities\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.809814 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvwwk\" (UniqueName: \"kubernetes.io/projected/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-kube-api-access-gvwwk\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.809909 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-catalog-content\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.910953 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-utilities\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.911013 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvwwk\" (UniqueName: \"kubernetes.io/projected/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-kube-api-access-gvwwk\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.911089 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-catalog-content\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.911605 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-catalog-content\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.911878 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-utilities\") pod \"redhat-marketplace-7jr2p\" (UID: 
\"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:04 crc kubenswrapper[4979]: I0130 21:47:04.938567 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvwwk\" (UniqueName: \"kubernetes.io/projected/f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9-kube-api-access-gvwwk\") pod \"redhat-marketplace-7jr2p\" (UID: \"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9\") " pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.112955 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.121667 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jr2p" Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.221131 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" podUID="43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" containerName="registry" containerID="cri-o://772ed6de3e14868a31eee279f850d2d08ee72d544656a44996cff23085c636cb" gracePeriod=30 Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.390364 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8s6x" event={"ID":"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d","Type":"ContainerStarted","Data":"eeb154b4cda9a7e6e8b3aaefc06a886e9df015be60988d21b2794ff047bde9ff"} Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.392346 4979 generic.go:334] "Generic (PLEG): container finished" podID="43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" containerID="772ed6de3e14868a31eee279f850d2d08ee72d544656a44996cff23085c636cb" exitCode=0 Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.392403 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" event={"ID":"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08","Type":"ContainerDied","Data":"772ed6de3e14868a31eee279f850d2d08ee72d544656a44996cff23085c636cb"} Jan 30 21:47:05 crc kubenswrapper[4979]: I0130 21:47:05.510644 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jr2p"] Jan 30 21:47:05 crc kubenswrapper[4979]: W0130 21:47:05.518306 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ed839b_8a68_4f8d_b12b_dac0b2fae9d9.slice/crio-02eb621919cdca523111c9e77f139cf95329079b9c1fa1cf70ef4517fe3ef933 WatchSource:0}: Error finding container 02eb621919cdca523111c9e77f139cf95329079b9c1fa1cf70ef4517fe3ef933: Status 404 returned error can't find the container with id 02eb621919cdca523111c9e77f139cf95329079b9c1fa1cf70ef4517fe3ef933 Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.324612 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329197 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-certificates\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329237 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-trusted-ca\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329269 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5jlk\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-kube-api-access-s5jlk\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329308 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-bound-sa-token\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329339 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-tls\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329363 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-ca-trust-extracted\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329397 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-installation-pull-secrets\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.329546 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\" (UID: \"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08\") " Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.330661 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.330680 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.354222 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.354941 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.354985 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-kube-api-access-s5jlk" (OuterVolumeSpecName: "kube-api-access-s5jlk") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "kube-api-access-s5jlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.355306 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.355532 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.357500 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" (UID: "43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.398884 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" event={"ID":"43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08","Type":"ContainerDied","Data":"7b232422461df3a64ba9f7d1e8e42a5bbd92a1d12e44b90cbcab93e3d93f6389"} Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.398942 4979 scope.go:117] "RemoveContainer" containerID="772ed6de3e14868a31eee279f850d2d08ee72d544656a44996cff23085c636cb" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.398948 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rvdlc" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.402111 4979 generic.go:334] "Generic (PLEG): container finished" podID="ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d" containerID="eeb154b4cda9a7e6e8b3aaefc06a886e9df015be60988d21b2794ff047bde9ff" exitCode=0 Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.402159 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8s6x" event={"ID":"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d","Type":"ContainerDied","Data":"eeb154b4cda9a7e6e8b3aaefc06a886e9df015be60988d21b2794ff047bde9ff"} Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.403846 4979 generic.go:334] "Generic (PLEG): container finished" podID="f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9" containerID="c9a57a1bac9c8bc957a5e2f3b7739ffde4af68eeff931e2133843031cbf28f88" exitCode=0 Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.403929 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jr2p" event={"ID":"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9","Type":"ContainerDied","Data":"c9a57a1bac9c8bc957a5e2f3b7739ffde4af68eeff931e2133843031cbf28f88"} Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.403963 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jr2p" event={"ID":"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9","Type":"ContainerStarted","Data":"02eb621919cdca523111c9e77f139cf95329079b9c1fa1cf70ef4517fe3ef933"} Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.406189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerStarted","Data":"987424a460c36bb8c4afbae895f5e17f696c5e1c101adee6c040d5a1d185626a"} Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432831 4979 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432894 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432911 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5jlk\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-kube-api-access-s5jlk\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432929 4979 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432941 4979 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432953 4979 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.432966 4979 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.466087 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mvj6v" podStartSLOduration=3.03537498 podStartE2EDuration="6.466062665s" podCreationTimestamp="2026-01-30 21:47:00 +0000 UTC" firstStartedPulling="2026-01-30 21:47:02.367540258 +0000 UTC m=+418.328787291" lastFinishedPulling="2026-01-30 21:47:05.798227943 +0000 UTC m=+421.759474976" observedRunningTime="2026-01-30 21:47:06.465084628 +0000 UTC m=+422.426331681" watchObservedRunningTime="2026-01-30 21:47:06.466062665 +0000 UTC m=+422.427309688" Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.485952 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rvdlc"] Jan 30 21:47:06 crc kubenswrapper[4979]: I0130 21:47:06.489704 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rvdlc"] Jan 30 21:47:07 crc kubenswrapper[4979]: I0130 21:47:07.164425 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" path="/var/lib/kubelet/pods/43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08/volumes" Jan 30 21:47:08 crc kubenswrapper[4979]: I0130 21:47:08.421259 4979 generic.go:334] "Generic (PLEG): container finished" podID="eb5ba6de-4ef3-49a4-bd09-1ca00d210025" containerID="c312300789c563f974e43fa424703d387daec72910ff9772f82e77c140ece03e" exitCode=0 Jan 30 21:47:08 crc kubenswrapper[4979]: I0130 21:47:08.421325 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfnsx" event={"ID":"eb5ba6de-4ef3-49a4-bd09-1ca00d210025","Type":"ContainerDied","Data":"c312300789c563f974e43fa424703d387daec72910ff9772f82e77c140ece03e"} Jan 30 21:47:08 crc kubenswrapper[4979]: I0130 21:47:08.426394 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8s6x" event={"ID":"ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d","Type":"ContainerStarted","Data":"0ea2b5bbf15e4bc7b0a93651ab8f32f9f56af3fb66e4f350462b29d8244055ef"} Jan 30 21:47:08 crc kubenswrapper[4979]: I0130 21:47:08.458928 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k8s6x" podStartSLOduration=1.929896502 podStartE2EDuration="6.458903423s" podCreationTimestamp="2026-01-30 21:47:02 +0000 UTC" firstStartedPulling="2026-01-30 21:47:03.368848508 +0000 UTC m=+419.330095541" lastFinishedPulling="2026-01-30 21:47:07.897855429 +0000 UTC m=+423.859102462" 
Jan 30 21:47:10 crc kubenswrapper[4979]: I0130 21:47:10.908684 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:10 crc kubenswrapper[4979]: I0130 21:47:10.909284 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:10 crc kubenswrapper[4979]: I0130 21:47:10.947827 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:11 crc kubenswrapper[4979]: I0130 21:47:11.147480 4979 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podde06742d-2533-4510-abec-ff0f35d84a45"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podde06742d-2533-4510-abec-ff0f35d84a45] : Timed out while waiting for systemd to remove kubepods-burstable-podde06742d_2533_4510_abec_ff0f35d84a45.slice"
Jan 30 21:47:11 crc kubenswrapper[4979]: I0130 21:47:11.444485 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfnsx" event={"ID":"eb5ba6de-4ef3-49a4-bd09-1ca00d210025","Type":"ContainerStarted","Data":"7739c01b720ac89e7854c2e1880d8a3f2cf3ec49342814a15cddb7299ac74dd2"}
Jan 30 21:47:11 crc kubenswrapper[4979]: I0130 21:47:11.446487 4979 generic.go:334] "Generic (PLEG): container finished" podID="f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9" containerID="1f33be4ed9755d85029de88fd7a0300d054960f16f952a88399c3e320da1c161" exitCode=0
Jan 30 21:47:11 crc kubenswrapper[4979]: I0130 21:47:11.446689 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jr2p" event={"ID":"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9","Type":"ContainerDied","Data":"1f33be4ed9755d85029de88fd7a0300d054960f16f952a88399c3e320da1c161"}
Jan 30 21:47:11 crc kubenswrapper[4979]: I0130 21:47:11.471662 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wfnsx" podStartSLOduration=4.276373873 podStartE2EDuration="9.471637223s" podCreationTimestamp="2026-01-30 21:47:02 +0000 UTC" firstStartedPulling="2026-01-30 21:47:04.401891965 +0000 UTC m=+420.363138998" lastFinishedPulling="2026-01-30 21:47:09.597155315 +0000 UTC m=+425.558402348" observedRunningTime="2026-01-30 21:47:11.467949853 +0000 UTC m=+427.429196886" watchObservedRunningTime="2026-01-30 21:47:11.471637223 +0000 UTC m=+427.432884286"
Jan 30 21:47:11 crc kubenswrapper[4979]: I0130 21:47:11.505097 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mvj6v"
Jan 30 21:47:12 crc kubenswrapper[4979]: I0130 21:47:12.721502 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k8s6x"
Jan 30 21:47:12 crc kubenswrapper[4979]: I0130 21:47:12.721903 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k8s6x"
Jan 30 21:47:13 crc kubenswrapper[4979]: I0130 21:47:13.316146 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wfnsx"
Jan 30 21:47:13 crc kubenswrapper[4979]: I0130 21:47:13.316215 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wfnsx"
Jan 30 21:47:13 crc kubenswrapper[4979]: I0130 21:47:13.362286 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wfnsx"
Jan 30 21:47:13 crc kubenswrapper[4979]: I0130 21:47:13.457440 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jr2p" event={"ID":"f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9","Type":"ContainerStarted","Data":"32130af28b1071620871ecc1045d52462bb2aff28cda2c02f94f3d183a6bc005"}
Jan 30 21:47:13 crc kubenswrapper[4979]: I0130 21:47:13.480259 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7jr2p" podStartSLOduration=3.664410297 podStartE2EDuration="9.480238255s" podCreationTimestamp="2026-01-30 21:47:04 +0000 UTC" firstStartedPulling="2026-01-30 21:47:06.876631222 +0000 UTC m=+422.837878255" lastFinishedPulling="2026-01-30 21:47:12.69245918 +0000 UTC m=+428.653706213" observedRunningTime="2026-01-30 21:47:13.47264137 +0000 UTC m=+429.433888403" watchObservedRunningTime="2026-01-30 21:47:13.480238255 +0000 UTC m=+429.441485308"
Jan 30 21:47:13 crc kubenswrapper[4979]: I0130 21:47:13.793192 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k8s6x" podUID="ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d" containerName="registry-server" probeResult="failure" output=<
Jan 30 21:47:13 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s
Jan 30 21:47:13 crc kubenswrapper[4979]: >
Jan 30 21:47:15 crc kubenswrapper[4979]: I0130 21:47:15.122806 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7jr2p"
Jan 30 21:47:15 crc kubenswrapper[4979]: I0130 21:47:15.123144 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7jr2p"
Jan 30 21:47:15 crc kubenswrapper[4979]: I0130 21:47:15.165821 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7jr2p"
Jan 30 21:47:22 crc kubenswrapper[4979]: I0130 21:47:22.772692 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k8s6x"
Jan 30 21:47:22 crc kubenswrapper[4979]: I0130 21:47:22.825494 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k8s6x"
Jan 30 21:47:23 crc kubenswrapper[4979]: I0130 21:47:23.363218 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wfnsx"
Jan 30 21:47:25 crc kubenswrapper[4979]: I0130 21:47:25.198679 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7jr2p"
Jan 30 21:49:02 crc kubenswrapper[4979]: I0130 21:49:02.039858 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:49:02 crc kubenswrapper[4979]: I0130 21:49:02.040705 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:49:05 crc kubenswrapper[4979]: I0130 21:49:05.424912 4979 scope.go:117] "RemoveContainer" containerID="1a95ca4d3d52fa45ac0c03598e04f51654e2ae85b01f82e3a46a20846a9d630c" Jan 30 21:49:05 crc kubenswrapper[4979]: I0130 21:49:05.454806 4979 scope.go:117] "RemoveContainer" containerID="0e6f69cd4614a1bb62b39b70bbd49625b932e4c6dcb736053a2748eac81dda1e" Jan 30 21:49:32 crc kubenswrapper[4979]: I0130 21:49:32.040204 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:49:32 crc kubenswrapper[4979]: I0130 21:49:32.041134 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.040311 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.041219 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.041315 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.041974 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de200c01a74c734df60d272ffbf006cff1b226d077b5d5ae12ed63d78d99ee43"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.042069 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://de200c01a74c734df60d272ffbf006cff1b226d077b5d5ae12ed63d78d99ee43" gracePeriod=600 Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.883413 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="de200c01a74c734df60d272ffbf006cff1b226d077b5d5ae12ed63d78d99ee43" exitCode=0 Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.883512 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"de200c01a74c734df60d272ffbf006cff1b226d077b5d5ae12ed63d78d99ee43"} Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.884434 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"bb31c8508ba9d5d13bdcaefa52c28a222060abce65ea336c482658b625bc9222"} Jan 30 21:50:02 crc kubenswrapper[4979]: I0130 21:50:02.884480 4979 scope.go:117] "RemoveContainer" containerID="1d5308deb4fb750f100d625c67d41f0e4ff6f56c501723aebe861edc5dea525b" Jan 30 21:52:02 crc kubenswrapper[4979]: I0130 21:52:02.039516 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:52:02 crc kubenswrapper[4979]: I0130 21:52:02.041194 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:52:32 crc kubenswrapper[4979]: I0130 21:52:32.040180 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:52:32 crc kubenswrapper[4979]: I0130 21:52:32.041172 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:52:40 crc kubenswrapper[4979]: I0130 21:52:40.875595 4979 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.039521 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.040593 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.040685 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.041826 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"bb31c8508ba9d5d13bdcaefa52c28a222060abce65ea336c482658b625bc9222"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.041938 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://bb31c8508ba9d5d13bdcaefa52c28a222060abce65ea336c482658b625bc9222" gracePeriod=600 Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.246706 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="bb31c8508ba9d5d13bdcaefa52c28a222060abce65ea336c482658b625bc9222" exitCode=0 Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.246920 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"bb31c8508ba9d5d13bdcaefa52c28a222060abce65ea336c482658b625bc9222"} Jan 30 21:53:02 crc kubenswrapper[4979]: I0130 21:53:02.247322 4979 scope.go:117] "RemoveContainer" containerID="de200c01a74c734df60d272ffbf006cff1b226d077b5d5ae12ed63d78d99ee43" Jan 30 21:53:03 crc kubenswrapper[4979]: I0130 21:53:03.265254 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"ae293b4c8eb11a00dbc67116c5050f26eebdb7d47b98e26880adeb06c2d3bf28"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.162191 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jttsv"] Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.167856 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-controller" containerID="cri-o://2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.167891 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="nbdb" containerID="cri-o://0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.168016 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="sbdb" containerID="cri-o://47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.168099 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="northd" containerID="cri-o://ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.168157 4979 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.168204 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-node" containerID="cri-o://c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.168280 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-acl-logging" containerID="cri-o://d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.222238 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" containerID="cri-o://924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" gracePeriod=30 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.868338 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/3.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.875821 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovn-acl-logging/0.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.876513 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovn-controller/0.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.877154 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.935657 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r6k8t"] Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.935978 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.935997 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936011 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936019 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936046 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936055 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936065 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kubecfg-setup" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936072 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kubecfg-setup" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936082 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" containerName="registry" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936089 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" containerName="registry" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936098 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-acl-logging" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936105 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-acl-logging" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936160 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936170 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936182 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-node" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936332 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-node" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936346 4979 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="nbdb" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936355 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="nbdb" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936367 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936376 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936384 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="sbdb" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936394 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="sbdb" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936409 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="northd" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936417 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="northd" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936576 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="43aa78b0-65ce-4b63-a2fe-6cc0ecf3be08" containerName="registry" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936586 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936594 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovn-acl-logging" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936608 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936618 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936628 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="sbdb" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936640 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="nbdb" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936650 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="kube-rbac-proxy-node" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936663 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936671 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="northd" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936680 4979 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936689 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936922 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936937 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: E0130 21:54:38.936951 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.936958 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.937106 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerName="ovnkube-controller" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.941837 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.943985 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-log-socket\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944072 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d2f440e9-633b-41c3-ba83-2f6195004621-ovn-node-metrics-cert\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944111 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944133 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-kubelet\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944166 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-node-log\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc 
kubenswrapper[4979]: I0130 21:54:38.944227 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-ovnkube-script-lib\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944278 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-run-ovn-kubernetes\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944311 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-var-lib-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944349 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-etc-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944372 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-run-netns\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944424 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-cni-bin\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944445 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-slash\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944472 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-ovnkube-config\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944497 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-systemd-units\") pod \"ovnkube-node-r6k8t\" (UID: 
\"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944527 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944556 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-677b9\" (UniqueName: \"kubernetes.io/projected/d2f440e9-633b-41c3-ba83-2f6195004621-kube-api-access-677b9\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944588 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-ovn\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944708 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-systemd\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944766 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-cni-netd\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.944857 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-env-overrides\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.983825 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/2.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.984494 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/1.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.984564 4979 generic.go:334] "Generic (PLEG): container finished" podID="6722e8df-a635-4808-b6b9-d5633fc3d34b" containerID="63eeeb7e581e8ce3888839e2e83b0b7c4eb60c14ab5554f1fd5b47b9651c9ea0" exitCode=2 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.984636 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerDied","Data":"63eeeb7e581e8ce3888839e2e83b0b7c4eb60c14ab5554f1fd5b47b9651c9ea0"} Jan 30 21:54:38 
crc kubenswrapper[4979]: I0130 21:54:38.984696 4979 scope.go:117] "RemoveContainer" containerID="94ff9dd2fea9915248532747b48b5ce5d57958ed37353aa04155196c2d910ca5" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.985797 4979 scope.go:117] "RemoveContainer" containerID="63eeeb7e581e8ce3888839e2e83b0b7c4eb60c14ab5554f1fd5b47b9651c9ea0" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.987919 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovnkube-controller/3.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.990215 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovn-acl-logging/0.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.990598 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jttsv_34ce4851-1ecc-47da-89ca-09894eb0908a/ovn-controller/0.log" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992293 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" exitCode=0 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992316 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" exitCode=0 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992323 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" exitCode=0 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992332 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" exitCode=0 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992361 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" exitCode=0 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992369 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" exitCode=0 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992377 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" exitCode=143 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992383 4979 generic.go:334] "Generic (PLEG): container finished" podID="34ce4851-1ecc-47da-89ca-09894eb0908a" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" exitCode=143 Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992400 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992435 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992449 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992458 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992456 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992468 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992581 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992595 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992608 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992614 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992620 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992625 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992632 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992638 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992644 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992651 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992657 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992668 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992677 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992684 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992691 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992697 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992704 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992711 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992718 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992725 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992732 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992739 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992750 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992762 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992769 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992776 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992783 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992789 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992795 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992801 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992810 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992817 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992824 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992833 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jttsv" event={"ID":"34ce4851-1ecc-47da-89ca-09894eb0908a","Type":"ContainerDied","Data":"f18a371d736e6911b0f592f8daaea8c3e8cd37b3a1facadbee20dabf9d3b9ce4"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992843 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992852 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 
21:54:38.992858 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992864 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992872 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992880 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992888 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992896 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992902 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} Jan 30 21:54:38 crc kubenswrapper[4979]: I0130 21:54:38.992908 4979 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.027427 4979 scope.go:117] "RemoveContainer" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.045598 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-env-overrides\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046004 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-var-lib-openvswitch\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046085 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046102 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046141 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-openvswitch\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046163 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-etc-openvswitch\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046210 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046308 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046374 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-ovn\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046669 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.046750 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.047948 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gg6r\" (UniqueName: \"kubernetes.io/projected/34ce4851-1ecc-47da-89ca-09894eb0908a-kube-api-access-5gg6r\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048008 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-bin\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048064 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048106 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-systemd-units\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048123 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048140 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-systemd\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048230 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-config\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048249 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34ce4851-1ecc-47da-89ca-09894eb0908a-ovn-node-metrics-cert\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048269 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-log-socket\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048178 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048195 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048326 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-kubelet\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048358 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-netd\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048430 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-script-lib\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048354 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-log-socket" (OuterVolumeSpecName: "log-socket") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048378 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048392 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048550 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). 
InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048815 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048454 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-netns\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048881 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-ovn-kubernetes\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048899 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-slash\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048951 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048976 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-slash" (OuterVolumeSpecName: "host-slash") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.048916 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-node-log\") pod \"34ce4851-1ecc-47da-89ca-09894eb0908a\" (UID: \"34ce4851-1ecc-47da-89ca-09894eb0908a\") " Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049156 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-node-log" (OuterVolumeSpecName: "node-log") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049259 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-cni-netd\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049284 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-env-overrides\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049343 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-cni-netd\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049427 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049449 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-log-socket\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049467 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d2f440e9-633b-41c3-ba83-2f6195004621-ovn-node-metrics-cert\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049494 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-kubelet\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049508 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049537 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-log-socket\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049574 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-kubelet\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049574 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049791 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-node-log\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049876 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-node-log\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049939 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-ovnkube-script-lib\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049965 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-run-ovn-kubernetes\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049984 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-var-lib-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.049996 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-env-overrides\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050059 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-etc-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050066 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-run-ovn-kubernetes\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050083 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-run-netns\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050109 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-var-lib-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050121 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-cni-bin\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050137 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-run-netns\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050149 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-slash\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050161 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-etc-openvswitch\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050184 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-cni-bin\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050215 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-ovnkube-config\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050240 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-systemd-units\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050267 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050287 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-677b9\" (UniqueName: \"kubernetes.io/projected/d2f440e9-633b-41c3-ba83-2f6195004621-kube-api-access-677b9\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050318 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-ovn\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050349 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-systemd\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050399 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050498 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-ovnkube-script-lib\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050640 4979 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050664 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-systemd-units\") pod \"ovnkube-node-r6k8t\" (UID: 
\"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050681 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-systemd\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050700 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-run-ovn\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050717 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d2f440e9-633b-41c3-ba83-2f6195004621-host-slash\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050732 4979 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050748 4979 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050761 4979 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050774 4979 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050787 4979 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050795 4979 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050804 4979 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050814 4979 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050823 4979 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050832 4979 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050844 4979 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050857 4979 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050869 4979 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050881 4979 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34ce4851-1ecc-47da-89ca-09894eb0908a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050890 4979 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050900 4979 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.050908 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d2f440e9-633b-41c3-ba83-2f6195004621-ovnkube-config\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.053265 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ce4851-1ecc-47da-89ca-09894eb0908a-kube-api-access-5gg6r" (OuterVolumeSpecName: "kube-api-access-5gg6r") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "kube-api-access-5gg6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.053938 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d2f440e9-633b-41c3-ba83-2f6195004621-ovn-node-metrics-cert\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.061289 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ce4851-1ecc-47da-89ca-09894eb0908a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.066378 4979 scope.go:117] "RemoveContainer" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.068152 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "34ce4851-1ecc-47da-89ca-09894eb0908a" (UID: "34ce4851-1ecc-47da-89ca-09894eb0908a"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.070682 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-677b9\" (UniqueName: \"kubernetes.io/projected/d2f440e9-633b-41c3-ba83-2f6195004621-kube-api-access-677b9\") pod \"ovnkube-node-r6k8t\" (UID: \"d2f440e9-633b-41c3-ba83-2f6195004621\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.087835 4979 scope.go:117] "RemoveContainer" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.102093 4979 scope.go:117] "RemoveContainer" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.115631 4979 scope.go:117] "RemoveContainer" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.150386 4979 scope.go:117] "RemoveContainer" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.154756 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gg6r\" (UniqueName: \"kubernetes.io/projected/34ce4851-1ecc-47da-89ca-09894eb0908a-kube-api-access-5gg6r\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.154783 4979 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34ce4851-1ecc-47da-89ca-09894eb0908a-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.154798 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34ce4851-1ecc-47da-89ca-09894eb0908a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.166746 4979 scope.go:117] "RemoveContainer" 
containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.179549 4979 scope.go:117] "RemoveContainer" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.195204 4979 scope.go:117] "RemoveContainer" containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.208289 4979 scope.go:117] "RemoveContainer" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.208766 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": container with ID starting with 924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d not found: ID does not exist" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.208806 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} err="failed to get container status \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": rpc error: code = NotFound desc = could not find container \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": container with ID starting with 924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.208835 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.209147 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": container with ID starting with 88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9 not found: ID does not exist" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.209169 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} err="failed to get container status \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": rpc error: code = NotFound desc = could not find container \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": container with ID starting with 88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.209185 4979 scope.go:117] "RemoveContainer" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.209399 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": container with ID starting with 47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9 not found: ID does not exist" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" 
Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.209424 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} err="failed to get container status \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": rpc error: code = NotFound desc = could not find container \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": container with ID starting with 47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.209440 4979 scope.go:117] "RemoveContainer" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.209666 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": container with ID starting with 0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f not found: ID does not exist" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.209687 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} err="failed to get container status \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": rpc error: code = NotFound desc = could not find container \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": container with ID starting with 0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.209702 4979 scope.go:117] "RemoveContainer" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.210027 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": container with ID starting with ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23 not found: ID does not exist" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210061 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} err="failed to get container status \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": rpc error: code = NotFound desc = could not find container \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": container with ID starting with ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210074 4979 scope.go:117] "RemoveContainer" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.210291 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": container with ID starting with 
9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af not found: ID does not exist" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210312 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} err="failed to get container status \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": rpc error: code = NotFound desc = could not find container \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": container with ID starting with 9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210323 4979 scope.go:117] "RemoveContainer" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.210485 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": container with ID starting with c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce not found: ID does not exist" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210509 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} err="failed to get container status \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": rpc error: code = NotFound desc = could not find container \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": container with ID starting with c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210523 4979 scope.go:117] "RemoveContainer" containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.210697 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": container with ID starting with d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56 not found: ID does not exist" containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210723 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} err="failed to get container status \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": rpc error: code = NotFound desc = could not find container \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": container with ID starting with d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210738 4979 scope.go:117] "RemoveContainer" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.210936 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": container with ID starting with 2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab not found: ID does not exist" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210956 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} err="failed to get container status \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": rpc error: code = NotFound desc = could not find container \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": container with ID starting with 2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.210969 4979 scope.go:117] "RemoveContainer" containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" Jan 30 21:54:39 crc kubenswrapper[4979]: E0130 21:54:39.211154 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": container with ID starting with bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581 not found: ID does not exist" containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211175 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} err="failed to get container status \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": rpc error: code = NotFound desc = could not find container \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": container with ID starting with bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211186 4979 scope.go:117] "RemoveContainer" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211357 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} err="failed to get container status \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": rpc error: code = NotFound desc = could not find container \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": container with ID starting with 924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211373 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211563 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} err="failed to get container status \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": rpc error: code = NotFound desc = could not find container 
\"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": container with ID starting with 88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211587 4979 scope.go:117] "RemoveContainer" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211769 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} err="failed to get container status \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": rpc error: code = NotFound desc = could not find container \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": container with ID starting with 47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211786 4979 scope.go:117] "RemoveContainer" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211948 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} err="failed to get container status \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": rpc error: code = NotFound desc = could not find container \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": container with ID starting with 0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.211971 4979 scope.go:117] "RemoveContainer" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212177 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} err="failed to get container status \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": rpc error: code = NotFound desc = could not find container \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": container with ID starting with ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212200 4979 scope.go:117] "RemoveContainer" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212440 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} err="failed to get container status \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": rpc error: code = NotFound desc = could not find container \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": container with ID starting with 9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212466 4979 scope.go:117] "RemoveContainer" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212668 4979 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} err="failed to get container status \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": rpc error: code = NotFound desc = could not find container \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": container with ID starting with c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212688 4979 scope.go:117] "RemoveContainer" containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212839 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} err="failed to get container status \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": rpc error: code = NotFound desc = could not find container \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": container with ID starting with d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.212859 4979 scope.go:117] "RemoveContainer" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.213013 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} err="failed to get container status \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": rpc error: code = NotFound desc = could not find container \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": container with ID starting with 2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.213048 4979 scope.go:117] "RemoveContainer" containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.213213 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} err="failed to get container status \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": rpc error: code = NotFound desc = could not find container \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": container with ID starting with bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.213232 4979 scope.go:117] "RemoveContainer" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.213444 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} err="failed to get container status \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": rpc error: code = NotFound desc = could not find container \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": container with ID starting with 
924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.213474 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214049 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} err="failed to get container status \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": rpc error: code = NotFound desc = could not find container \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": container with ID starting with 88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214105 4979 scope.go:117] "RemoveContainer" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214410 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} err="failed to get container status \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": rpc error: code = NotFound desc = could not find container \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": container with ID starting with 47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214432 4979 scope.go:117] "RemoveContainer" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214622 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} err="failed to get container status \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": rpc error: code = NotFound desc = could not find container \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": container with ID starting with 0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214642 4979 scope.go:117] "RemoveContainer" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214866 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} err="failed to get container status \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": rpc error: code = NotFound desc = could not find container \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": container with ID starting with ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.214883 4979 scope.go:117] "RemoveContainer" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215083 4979 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} err="failed to get container status \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": rpc error: code = NotFound desc = could not find container \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": container with ID starting with 9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215099 4979 scope.go:117] "RemoveContainer" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215268 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} err="failed to get container status \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": rpc error: code = NotFound desc = could not find container \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": container with ID starting with c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215285 4979 scope.go:117] "RemoveContainer" containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215438 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} err="failed to get container status \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": rpc error: code = NotFound desc = could not find container \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": container with ID starting with d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215463 4979 scope.go:117] "RemoveContainer" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215628 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} err="failed to get container status \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": rpc error: code = NotFound desc = could not find container \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": container with ID starting with 2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215645 4979 scope.go:117] "RemoveContainer" containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215841 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} err="failed to get container status \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": rpc error: code = NotFound desc = could not find container \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": container with ID starting with bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581 not found: ID does not exist" Jan 
30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.215859 4979 scope.go:117] "RemoveContainer" containerID="924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216008 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d"} err="failed to get container status \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": rpc error: code = NotFound desc = could not find container \"924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d\": container with ID starting with 924ea2590c28a2b5d5147234204fba0f7c151e7c02ca1c877b4bf09ae581d86d not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216025 4979 scope.go:117] "RemoveContainer" containerID="88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216264 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9"} err="failed to get container status \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": rpc error: code = NotFound desc = could not find container \"88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9\": container with ID starting with 88786cf3de55ad343e6cfaf9b5ac0d14c5d50c2700d0be7041a73a51deac56e9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216284 4979 scope.go:117] "RemoveContainer" containerID="47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216525 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9"} err="failed to get container status \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": rpc error: code = NotFound desc = could not find container \"47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9\": container with ID starting with 47edf78b037a66c6488566868f4b69679059965dcf9db5a260985f7de83cb1b9 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216552 4979 scope.go:117] "RemoveContainer" containerID="0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216746 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f"} err="failed to get container status \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": rpc error: code = NotFound desc = could not find container \"0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f\": container with ID starting with 0c2c1a8cea5ab7c7a18a770a3ef3078d47957e86b5a4f5b4588dc965b196806f not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216767 4979 scope.go:117] "RemoveContainer" containerID="ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216923 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23"} err="failed to get container status 
\"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": rpc error: code = NotFound desc = could not find container \"ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23\": container with ID starting with ddbf100bdc5b68db57f992bd0e1858af8ec748f2ced6ea8f3abab00559bd3e23 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.216943 4979 scope.go:117] "RemoveContainer" containerID="9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217157 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af"} err="failed to get container status \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": rpc error: code = NotFound desc = could not find container \"9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af\": container with ID starting with 9917a453c3b1ad78a939c3b0119b9fd85979a9502f6104215108629733bd49af not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217173 4979 scope.go:117] "RemoveContainer" containerID="c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217333 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce"} err="failed to get container status \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": rpc error: code = NotFound desc = could not find container \"c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce\": container with ID starting with c374b88e5476c1631edaa3fe4710b25f72d0c00e839a092c5615975fc6a793ce not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217351 4979 scope.go:117] "RemoveContainer" containerID="d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217515 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56"} err="failed to get container status \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": rpc error: code = NotFound desc = could not find container \"d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56\": container with ID starting with d2add1022c8ca61332a73caba3d30bd136f85e050d7c266bee8c394998e09b56 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217533 4979 scope.go:117] "RemoveContainer" containerID="2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217699 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab"} err="failed to get container status \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": rpc error: code = NotFound desc = could not find container \"2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab\": container with ID starting with 2a693a0a3d74485608e84c933e034773c8ec226976eaf07a81497d219d07c3ab not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217717 4979 scope.go:117] "RemoveContainer" 
containerID="bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.217929 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581"} err="failed to get container status \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": rpc error: code = NotFound desc = could not find container \"bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581\": container with ID starting with bfd948edbf058c1d853dfa996931f2d3cf7d6e7f3b575359f4bb15cd5fde4581 not found: ID does not exist" Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.257556 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:39 crc kubenswrapper[4979]: W0130 21:54:39.277110 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2f440e9_633b_41c3_ba83_2f6195004621.slice/crio-68da92ce926cdd69f9e0879fde7765b6e94989a08de2721afd83610884e2a5ec WatchSource:0}: Error finding container 68da92ce926cdd69f9e0879fde7765b6e94989a08de2721afd83610884e2a5ec: Status 404 returned error can't find the container with id 68da92ce926cdd69f9e0879fde7765b6e94989a08de2721afd83610884e2a5ec Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.359839 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jttsv"] Jan 30 21:54:39 crc kubenswrapper[4979]: I0130 21:54:39.368591 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jttsv"] Jan 30 21:54:40 crc kubenswrapper[4979]: I0130 21:54:40.001364 4979 generic.go:334] "Generic (PLEG): container finished" podID="d2f440e9-633b-41c3-ba83-2f6195004621" containerID="9680da53a0072a0af82bfce315277c7afb6976e51132839d2841b4f0e32443f0" exitCode=0 Jan 30 21:54:40 crc kubenswrapper[4979]: I0130 21:54:40.001469 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerDied","Data":"9680da53a0072a0af82bfce315277c7afb6976e51132839d2841b4f0e32443f0"} Jan 30 21:54:40 crc kubenswrapper[4979]: I0130 21:54:40.001885 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"68da92ce926cdd69f9e0879fde7765b6e94989a08de2721afd83610884e2a5ec"} Jan 30 21:54:40 crc kubenswrapper[4979]: I0130 21:54:40.005879 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xh5mg_6722e8df-a635-4808-b6b9-d5633fc3d34b/kube-multus/2.log" Jan 30 21:54:40 crc kubenswrapper[4979]: I0130 21:54:40.005975 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xh5mg" event={"ID":"6722e8df-a635-4808-b6b9-d5633fc3d34b","Type":"ContainerStarted","Data":"a648a1eb896eede38c93068819f0a43dcb99f6f9b3238b3b3b8e7809fbcad058"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.015182 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"af08390c250f77eff24182962a6e359d6b0cb16fa868bb928a683b7b8323ecef"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.015694 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"6c07ebecd69fc763f832be746633a10addbf9766e08f97166d597636e225949f"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.015706 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"dc054c51d90538de6f6427e3d1e3cbd73304ef4d999826fefd09881596a2d10f"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.015717 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"008ea364b9b5969ccf3c5ae14e9ee0742d62d5b5e4778de92aa0ac20ce39927f"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.015727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"8537d3457c3cf1fa642e4dfdba5ffdd2845d49470adecfe3dfa4e5916f41025f"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.015737 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"bd873d06f285769c04a9f10e3b6bce4034ae849d3c9d8be50b28bc09db86cbbd"} Jan 30 21:54:41 crc kubenswrapper[4979]: I0130 21:54:41.076895 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ce4851-1ecc-47da-89ca-09894eb0908a" path="/var/lib/kubelet/pods/34ce4851-1ecc-47da-89ca-09894eb0908a/volumes" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.343168 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-sr9vn"] Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.345346 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.347744 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.349052 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.349129 4979 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-jpprx" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.349384 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.419105 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/55b164f6-7e71-4403-9598-6673cea6876e-node-mnt\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.419190 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/55b164f6-7e71-4403-9598-6673cea6876e-crc-storage\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.419219 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8kd4\" (UniqueName: \"kubernetes.io/projected/55b164f6-7e71-4403-9598-6673cea6876e-kube-api-access-w8kd4\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.520501 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/55b164f6-7e71-4403-9598-6673cea6876e-crc-storage\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.520574 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8kd4\" (UniqueName: \"kubernetes.io/projected/55b164f6-7e71-4403-9598-6673cea6876e-kube-api-access-w8kd4\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.520677 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/55b164f6-7e71-4403-9598-6673cea6876e-node-mnt\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.521023 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/55b164f6-7e71-4403-9598-6673cea6876e-node-mnt\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.521831 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/55b164f6-7e71-4403-9598-6673cea6876e-crc-storage\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.547856 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8kd4\" (UniqueName: \"kubernetes.io/projected/55b164f6-7e71-4403-9598-6673cea6876e-kube-api-access-w8kd4\") pod \"crc-storage-crc-sr9vn\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: I0130 21:54:43.664328 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: E0130 21:54:43.707148 4979 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(b9e0c72acd71c59d90f2f960b69088d1d974c89601bd186df05a8df0210eb1e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:54:43 crc kubenswrapper[4979]: E0130 21:54:43.707273 4979 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(b9e0c72acd71c59d90f2f960b69088d1d974c89601bd186df05a8df0210eb1e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: E0130 21:54:43.707314 4979 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(b9e0c72acd71c59d90f2f960b69088d1d974c89601bd186df05a8df0210eb1e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:43 crc kubenswrapper[4979]: E0130 21:54:43.707406 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-sr9vn_crc-storage(55b164f6-7e71-4403-9598-6673cea6876e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-sr9vn_crc-storage(55b164f6-7e71-4403-9598-6673cea6876e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(b9e0c72acd71c59d90f2f960b69088d1d974c89601bd186df05a8df0210eb1e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-sr9vn" podUID="55b164f6-7e71-4403-9598-6673cea6876e" Jan 30 21:54:44 crc kubenswrapper[4979]: I0130 21:54:44.041251 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"a28a10e271e9ed978b0682bd3665e6f3d83f376ede5a973f88697477ecbd6431"} Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.066504 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" event={"ID":"d2f440e9-633b-41c3-ba83-2f6195004621","Type":"ContainerStarted","Data":"ce791bd174715dd1afd21de50ab981e47a723de17d179408b2e0b5298441e592"} Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.069242 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.069825 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.114237 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.122381 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" podStartSLOduration=8.122357196 podStartE2EDuration="8.122357196s" podCreationTimestamp="2026-01-30 21:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:54:46.116392058 +0000 UTC m=+882.077639121" watchObservedRunningTime="2026-01-30 21:54:46.122357196 +0000 UTC m=+882.083604229" Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.875862 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-sr9vn"] Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.876000 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:46 crc kubenswrapper[4979]: I0130 21:54:46.876557 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:46 crc kubenswrapper[4979]: E0130 21:54:46.921699 4979 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(463a58c0a2dd83c86aab3b4f597eb48be5a59592414517910e64334709fb8984): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:54:46 crc kubenswrapper[4979]: E0130 21:54:46.934160 4979 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(463a58c0a2dd83c86aab3b4f597eb48be5a59592414517910e64334709fb8984): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:46 crc kubenswrapper[4979]: E0130 21:54:46.934223 4979 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(463a58c0a2dd83c86aab3b4f597eb48be5a59592414517910e64334709fb8984): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:54:46 crc kubenswrapper[4979]: E0130 21:54:46.934293 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-sr9vn_crc-storage(55b164f6-7e71-4403-9598-6673cea6876e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-sr9vn_crc-storage(55b164f6-7e71-4403-9598-6673cea6876e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-sr9vn_crc-storage_55b164f6-7e71-4403-9598-6673cea6876e_0(463a58c0a2dd83c86aab3b4f597eb48be5a59592414517910e64334709fb8984): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-sr9vn" podUID="55b164f6-7e71-4403-9598-6673cea6876e" Jan 30 21:54:47 crc kubenswrapper[4979]: I0130 21:54:47.079177 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:47 crc kubenswrapper[4979]: I0130 21:54:47.109832 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.417730 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-stz2f"] Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.420296 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.440357 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stz2f"] Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.507665 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-utilities\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.508060 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-catalog-content\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.508206 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q9sv\" (UniqueName: \"kubernetes.io/projected/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-kube-api-access-4q9sv\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.609815 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-utilities\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.610325 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-catalog-content\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.610448 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q9sv\" (UniqueName: \"kubernetes.io/projected/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-kube-api-access-4q9sv\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.610559 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-utilities\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.610879 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-catalog-content\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.635655 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4q9sv\" (UniqueName: \"kubernetes.io/projected/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-kube-api-access-4q9sv\") pod \"certified-operators-stz2f\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:54 crc kubenswrapper[4979]: I0130 21:54:54.744907 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:54:55 crc kubenswrapper[4979]: I0130 21:54:55.034335 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stz2f"] Jan 30 21:54:55 crc kubenswrapper[4979]: W0130 21:54:55.054492 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93c0d611_5c8f_4ae6_93d4_d5029516ea1e.slice/crio-ad287acce301a9d5c489a3ffb41cd669c0ade05cd8648a54675ae6665236e7df WatchSource:0}: Error finding container ad287acce301a9d5c489a3ffb41cd669c0ade05cd8648a54675ae6665236e7df: Status 404 returned error can't find the container with id ad287acce301a9d5c489a3ffb41cd669c0ade05cd8648a54675ae6665236e7df Jan 30 21:54:55 crc kubenswrapper[4979]: I0130 21:54:55.129295 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz2f" event={"ID":"93c0d611-5c8f-4ae6-93d4-d5029516ea1e","Type":"ContainerStarted","Data":"ad287acce301a9d5c489a3ffb41cd669c0ade05cd8648a54675ae6665236e7df"} Jan 30 21:54:56 crc kubenswrapper[4979]: I0130 21:54:56.153967 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz2f" event={"ID":"93c0d611-5c8f-4ae6-93d4-d5029516ea1e","Type":"ContainerDied","Data":"0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf"} Jan 30 21:54:56 crc kubenswrapper[4979]: I0130 21:54:56.153970 4979 generic.go:334] "Generic (PLEG): container finished" podID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerID="0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf" exitCode=0 Jan 30 21:54:56 crc kubenswrapper[4979]: I0130 21:54:56.157240 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:54:57 crc kubenswrapper[4979]: I0130 21:54:57.162667 4979 generic.go:334] "Generic (PLEG): container finished" podID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerID="65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8" exitCode=0 Jan 30 21:54:57 crc kubenswrapper[4979]: I0130 21:54:57.163170 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz2f" event={"ID":"93c0d611-5c8f-4ae6-93d4-d5029516ea1e","Type":"ContainerDied","Data":"65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8"} Jan 30 21:54:58 crc kubenswrapper[4979]: I0130 21:54:58.171821 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz2f" event={"ID":"93c0d611-5c8f-4ae6-93d4-d5029516ea1e","Type":"ContainerStarted","Data":"62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752"} Jan 30 21:54:58 crc kubenswrapper[4979]: I0130 21:54:58.203026 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-stz2f" podStartSLOduration=2.768167596 podStartE2EDuration="4.202998294s" podCreationTimestamp="2026-01-30 21:54:54 +0000 UTC" firstStartedPulling="2026-01-30 21:54:56.156946948 +0000 UTC 
m=+892.118193981" lastFinishedPulling="2026-01-30 21:54:57.591777636 +0000 UTC m=+893.553024679" observedRunningTime="2026-01-30 21:54:58.201067013 +0000 UTC m=+894.162314056" watchObservedRunningTime="2026-01-30 21:54:58.202998294 +0000 UTC m=+894.164245357" Jan 30 21:55:02 crc kubenswrapper[4979]: I0130 21:55:02.039632 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:55:02 crc kubenswrapper[4979]: I0130 21:55:02.040255 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:55:02 crc kubenswrapper[4979]: I0130 21:55:02.069873 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:55:02 crc kubenswrapper[4979]: I0130 21:55:02.070845 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:55:02 crc kubenswrapper[4979]: I0130 21:55:02.546828 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-sr9vn"] Jan 30 21:55:03 crc kubenswrapper[4979]: I0130 21:55:03.221408 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sr9vn" event={"ID":"55b164f6-7e71-4403-9598-6673cea6876e","Type":"ContainerStarted","Data":"ccc67f80dbda21ecf36ae40de3aab4b305feec6ba1350334879156336efd5488"} Jan 30 21:55:04 crc kubenswrapper[4979]: I0130 21:55:04.228765 4979 generic.go:334] "Generic (PLEG): container finished" podID="55b164f6-7e71-4403-9598-6673cea6876e" containerID="f69e5e60ca65ac037198a7875cb73ae5dd60bb9ab12c82aead51159afd7e44ab" exitCode=0 Jan 30 21:55:04 crc kubenswrapper[4979]: I0130 21:55:04.229255 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sr9vn" event={"ID":"55b164f6-7e71-4403-9598-6673cea6876e","Type":"ContainerDied","Data":"f69e5e60ca65ac037198a7875cb73ae5dd60bb9ab12c82aead51159afd7e44ab"} Jan 30 21:55:04 crc kubenswrapper[4979]: I0130 21:55:04.745515 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:55:04 crc kubenswrapper[4979]: I0130 21:55:04.745592 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:55:04 crc kubenswrapper[4979]: I0130 21:55:04.793201 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.298866 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.356203 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stz2f"] Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.577410 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.682503 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8kd4\" (UniqueName: \"kubernetes.io/projected/55b164f6-7e71-4403-9598-6673cea6876e-kube-api-access-w8kd4\") pod \"55b164f6-7e71-4403-9598-6673cea6876e\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.682616 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/55b164f6-7e71-4403-9598-6673cea6876e-crc-storage\") pod \"55b164f6-7e71-4403-9598-6673cea6876e\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.682732 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/55b164f6-7e71-4403-9598-6673cea6876e-node-mnt\") pod \"55b164f6-7e71-4403-9598-6673cea6876e\" (UID: \"55b164f6-7e71-4403-9598-6673cea6876e\") " Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.683244 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55b164f6-7e71-4403-9598-6673cea6876e-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "55b164f6-7e71-4403-9598-6673cea6876e" (UID: "55b164f6-7e71-4403-9598-6673cea6876e"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.691615 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b164f6-7e71-4403-9598-6673cea6876e-kube-api-access-w8kd4" (OuterVolumeSpecName: "kube-api-access-w8kd4") pod "55b164f6-7e71-4403-9598-6673cea6876e" (UID: "55b164f6-7e71-4403-9598-6673cea6876e"). InnerVolumeSpecName "kube-api-access-w8kd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.705011 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b164f6-7e71-4403-9598-6673cea6876e-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "55b164f6-7e71-4403-9598-6673cea6876e" (UID: "55b164f6-7e71-4403-9598-6673cea6876e"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.784746 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8kd4\" (UniqueName: \"kubernetes.io/projected/55b164f6-7e71-4403-9598-6673cea6876e-kube-api-access-w8kd4\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.784807 4979 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/55b164f6-7e71-4403-9598-6673cea6876e-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:05 crc kubenswrapper[4979]: I0130 21:55:05.784826 4979 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/55b164f6-7e71-4403-9598-6673cea6876e-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:06 crc kubenswrapper[4979]: I0130 21:55:06.248768 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-sr9vn" Jan 30 21:55:06 crc kubenswrapper[4979]: I0130 21:55:06.248789 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sr9vn" event={"ID":"55b164f6-7e71-4403-9598-6673cea6876e","Type":"ContainerDied","Data":"ccc67f80dbda21ecf36ae40de3aab4b305feec6ba1350334879156336efd5488"} Jan 30 21:55:06 crc kubenswrapper[4979]: I0130 21:55:06.248888 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc67f80dbda21ecf36ae40de3aab4b305feec6ba1350334879156336efd5488" Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.255409 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-stz2f" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="registry-server" containerID="cri-o://62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752" gracePeriod=2 Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.706909 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.819253 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-utilities\") pod \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.819316 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q9sv\" (UniqueName: \"kubernetes.io/projected/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-kube-api-access-4q9sv\") pod \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.819390 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-catalog-content\") pod \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\" (UID: \"93c0d611-5c8f-4ae6-93d4-d5029516ea1e\") " Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.821444 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-utilities" (OuterVolumeSpecName: "utilities") pod "93c0d611-5c8f-4ae6-93d4-d5029516ea1e" (UID: "93c0d611-5c8f-4ae6-93d4-d5029516ea1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.829276 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-kube-api-access-4q9sv" (OuterVolumeSpecName: "kube-api-access-4q9sv") pod "93c0d611-5c8f-4ae6-93d4-d5029516ea1e" (UID: "93c0d611-5c8f-4ae6-93d4-d5029516ea1e"). InnerVolumeSpecName "kube-api-access-4q9sv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.921591 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:07 crc kubenswrapper[4979]: I0130 21:55:07.921671 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q9sv\" (UniqueName: \"kubernetes.io/projected/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-kube-api-access-4q9sv\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.265201 4979 generic.go:334] "Generic (PLEG): container finished" podID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerID="62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752" exitCode=0 Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.265274 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz2f" event={"ID":"93c0d611-5c8f-4ae6-93d4-d5029516ea1e","Type":"ContainerDied","Data":"62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752"} Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.265344 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stz2f" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.265382 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz2f" event={"ID":"93c0d611-5c8f-4ae6-93d4-d5029516ea1e","Type":"ContainerDied","Data":"ad287acce301a9d5c489a3ffb41cd669c0ade05cd8648a54675ae6665236e7df"} Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.265454 4979 scope.go:117] "RemoveContainer" containerID="62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.293998 4979 scope.go:117] "RemoveContainer" containerID="65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.322438 4979 scope.go:117] "RemoveContainer" containerID="0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.346838 4979 scope.go:117] "RemoveContainer" containerID="62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752" Jan 30 21:55:08 crc kubenswrapper[4979]: E0130 21:55:08.347730 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752\": container with ID starting with 62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752 not found: ID does not exist" containerID="62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.347823 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752"} err="failed to get container status \"62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752\": rpc error: code = NotFound desc = could not find container \"62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752\": container with ID starting with 62cb33243e8e22fe01212765391f3f059cb126b37d37d768a7e210273aafa752 not found: ID does not exist" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.347868 4979 scope.go:117] 
"RemoveContainer" containerID="65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8" Jan 30 21:55:08 crc kubenswrapper[4979]: E0130 21:55:08.348713 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8\": container with ID starting with 65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8 not found: ID does not exist" containerID="65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.348776 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8"} err="failed to get container status \"65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8\": rpc error: code = NotFound desc = could not find container \"65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8\": container with ID starting with 65c726685dea6f820514610c122b0c9ae4e5d7172e4d2579441bc226b6311da8 not found: ID does not exist" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.348816 4979 scope.go:117] "RemoveContainer" containerID="0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf" Jan 30 21:55:08 crc kubenswrapper[4979]: E0130 21:55:08.349729 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf\": container with ID starting with 0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf not found: ID does not exist" containerID="0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.349762 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf"} err="failed to get container status \"0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf\": rpc error: code = NotFound desc = could not find container \"0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf\": container with ID starting with 0d0fbe6d4bdb0edc4edbc4e6abd17a9db7ed32e9878ece57a3272adb579d9dcf not found: ID does not exist" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.859655 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93c0d611-5c8f-4ae6-93d4-d5029516ea1e" (UID: "93c0d611-5c8f-4ae6-93d4-d5029516ea1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.927504 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stz2f"] Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.938946 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-stz2f"] Jan 30 21:55:08 crc kubenswrapper[4979]: I0130 21:55:08.940181 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0d611-5c8f-4ae6-93d4-d5029516ea1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:09 crc kubenswrapper[4979]: I0130 21:55:09.082156 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" path="/var/lib/kubelet/pods/93c0d611-5c8f-4ae6-93d4-d5029516ea1e/volumes" Jan 30 21:55:09 crc kubenswrapper[4979]: I0130 21:55:09.282110 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r6k8t" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.428770 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9"] Jan 30 21:55:12 crc kubenswrapper[4979]: E0130 21:55:12.429445 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b164f6-7e71-4403-9598-6673cea6876e" containerName="storage" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.429493 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b164f6-7e71-4403-9598-6673cea6876e" containerName="storage" Jan 30 21:55:12 crc kubenswrapper[4979]: E0130 21:55:12.429509 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="extract-content" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.429521 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="extract-content" Jan 30 21:55:12 crc kubenswrapper[4979]: E0130 21:55:12.429536 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="extract-utilities" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.429545 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="extract-utilities" Jan 30 21:55:12 crc kubenswrapper[4979]: E0130 21:55:12.429559 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="registry-server" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.429569 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="registry-server" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.429700 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c0d611-5c8f-4ae6-93d4-d5029516ea1e" containerName="registry-server" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.429713 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b164f6-7e71-4403-9598-6673cea6876e" containerName="storage" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.430930 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.434566 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.439919 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9"] Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.594796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7ht\" (UniqueName: \"kubernetes.io/projected/24460103-3748-49b9-9231-5a6e63ede52c-kube-api-access-ss7ht\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.594898 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.595359 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.697838 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.698085 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss7ht\" (UniqueName: \"kubernetes.io/projected/24460103-3748-49b9-9231-5a6e63ede52c-kube-api-access-ss7ht\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.698162 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.699636 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.699718 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.723952 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss7ht\" (UniqueName: \"kubernetes.io/projected/24460103-3748-49b9-9231-5a6e63ede52c-kube-api-access-ss7ht\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:12 crc kubenswrapper[4979]: I0130 21:55:12.752417 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:13 crc kubenswrapper[4979]: I0130 21:55:13.051831 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9"] Jan 30 21:55:13 crc kubenswrapper[4979]: I0130 21:55:13.305700 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" event={"ID":"24460103-3748-49b9-9231-5a6e63ede52c","Type":"ContainerStarted","Data":"91bb921e4350bacb30f3ab5fa2b4c1c8cc38f05ec6f493986bfecc12204a0dfd"} Jan 30 21:55:13 crc kubenswrapper[4979]: I0130 21:55:13.306223 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" event={"ID":"24460103-3748-49b9-9231-5a6e63ede52c","Type":"ContainerStarted","Data":"d0fb9a08ccc09bee63c7ea1c38b7828a31ec56f46b1f756a4c20c8dafcd8507b"} Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.315952 4979 generic.go:334] "Generic (PLEG): container finished" podID="24460103-3748-49b9-9231-5a6e63ede52c" containerID="91bb921e4350bacb30f3ab5fa2b4c1c8cc38f05ec6f493986bfecc12204a0dfd" exitCode=0 Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.316019 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" event={"ID":"24460103-3748-49b9-9231-5a6e63ede52c","Type":"ContainerDied","Data":"91bb921e4350bacb30f3ab5fa2b4c1c8cc38f05ec6f493986bfecc12204a0dfd"} Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.641597 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fhr5r"] Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.643356 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.666004 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhr5r"] Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.752883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-utilities\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.753389 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-catalog-content\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.753474 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqdjn\" (UniqueName: \"kubernetes.io/projected/673080e1-83e2-49f1-9c9a-713fb9367bea-kube-api-access-nqdjn\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.854416 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-utilities\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.854472 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-catalog-content\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.854533 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqdjn\" (UniqueName: \"kubernetes.io/projected/673080e1-83e2-49f1-9c9a-713fb9367bea-kube-api-access-nqdjn\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.855287 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-utilities\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.855515 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-catalog-content\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.890373 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nqdjn\" (UniqueName: \"kubernetes.io/projected/673080e1-83e2-49f1-9c9a-713fb9367bea-kube-api-access-nqdjn\") pod \"redhat-operators-fhr5r\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:14 crc kubenswrapper[4979]: I0130 21:55:14.962685 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:15 crc kubenswrapper[4979]: I0130 21:55:15.203912 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhr5r"] Jan 30 21:55:15 crc kubenswrapper[4979]: W0130 21:55:15.211118 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod673080e1_83e2_49f1_9c9a_713fb9367bea.slice/crio-eeeef076cf56a5e32adddbc1f962f1d02924f51e813862b47bf3524625754d23 WatchSource:0}: Error finding container eeeef076cf56a5e32adddbc1f962f1d02924f51e813862b47bf3524625754d23: Status 404 returned error can't find the container with id eeeef076cf56a5e32adddbc1f962f1d02924f51e813862b47bf3524625754d23 Jan 30 21:55:15 crc kubenswrapper[4979]: I0130 21:55:15.321984 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhr5r" event={"ID":"673080e1-83e2-49f1-9c9a-713fb9367bea","Type":"ContainerStarted","Data":"eeeef076cf56a5e32adddbc1f962f1d02924f51e813862b47bf3524625754d23"} Jan 30 21:55:16 crc kubenswrapper[4979]: I0130 21:55:16.330423 4979 generic.go:334] "Generic (PLEG): container finished" podID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerID="c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988" exitCode=0 Jan 30 21:55:16 crc kubenswrapper[4979]: I0130 21:55:16.330545 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhr5r" event={"ID":"673080e1-83e2-49f1-9c9a-713fb9367bea","Type":"ContainerDied","Data":"c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988"} Jan 30 21:55:16 crc kubenswrapper[4979]: I0130 21:55:16.333288 4979 generic.go:334] "Generic (PLEG): container finished" podID="24460103-3748-49b9-9231-5a6e63ede52c" containerID="a82e023663383677302026ce5a7796bcf301b4b1d7880563e3e891cec23be5d4" exitCode=0 Jan 30 21:55:16 crc kubenswrapper[4979]: I0130 21:55:16.333385 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" event={"ID":"24460103-3748-49b9-9231-5a6e63ede52c","Type":"ContainerDied","Data":"a82e023663383677302026ce5a7796bcf301b4b1d7880563e3e891cec23be5d4"} Jan 30 21:55:17 crc kubenswrapper[4979]: I0130 21:55:17.345542 4979 generic.go:334] "Generic (PLEG): container finished" podID="24460103-3748-49b9-9231-5a6e63ede52c" containerID="4abd4f323f30af2af9b11b46065d16ea9b02941c97bba6155cee77d904dac6f1" exitCode=0 Jan 30 21:55:17 crc kubenswrapper[4979]: I0130 21:55:17.345632 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" event={"ID":"24460103-3748-49b9-9231-5a6e63ede52c","Type":"ContainerDied","Data":"4abd4f323f30af2af9b11b46065d16ea9b02941c97bba6155cee77d904dac6f1"} Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.354485 4979 generic.go:334] "Generic (PLEG): container finished" podID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerID="3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60" exitCode=0 Jan 30 21:55:18 
crc kubenswrapper[4979]: I0130 21:55:18.354595 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhr5r" event={"ID":"673080e1-83e2-49f1-9c9a-713fb9367bea","Type":"ContainerDied","Data":"3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60"} Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.602757 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.713842 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-util\") pod \"24460103-3748-49b9-9231-5a6e63ede52c\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.713988 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss7ht\" (UniqueName: \"kubernetes.io/projected/24460103-3748-49b9-9231-5a6e63ede52c-kube-api-access-ss7ht\") pod \"24460103-3748-49b9-9231-5a6e63ede52c\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.714094 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-bundle\") pod \"24460103-3748-49b9-9231-5a6e63ede52c\" (UID: \"24460103-3748-49b9-9231-5a6e63ede52c\") " Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.717070 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-bundle" (OuterVolumeSpecName: "bundle") pod "24460103-3748-49b9-9231-5a6e63ede52c" (UID: "24460103-3748-49b9-9231-5a6e63ede52c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.722727 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24460103-3748-49b9-9231-5a6e63ede52c-kube-api-access-ss7ht" (OuterVolumeSpecName: "kube-api-access-ss7ht") pod "24460103-3748-49b9-9231-5a6e63ede52c" (UID: "24460103-3748-49b9-9231-5a6e63ede52c"). InnerVolumeSpecName "kube-api-access-ss7ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.797366 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-util" (OuterVolumeSpecName: "util") pod "24460103-3748-49b9-9231-5a6e63ede52c" (UID: "24460103-3748-49b9-9231-5a6e63ede52c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.815374 4979 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.815416 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss7ht\" (UniqueName: \"kubernetes.io/projected/24460103-3748-49b9-9231-5a6e63ede52c-kube-api-access-ss7ht\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:18 crc kubenswrapper[4979]: I0130 21:55:18.815426 4979 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24460103-3748-49b9-9231-5a6e63ede52c-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:19 crc kubenswrapper[4979]: I0130 21:55:19.362939 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" event={"ID":"24460103-3748-49b9-9231-5a6e63ede52c","Type":"ContainerDied","Data":"d0fb9a08ccc09bee63c7ea1c38b7828a31ec56f46b1f756a4c20c8dafcd8507b"} Jan 30 21:55:19 crc kubenswrapper[4979]: I0130 21:55:19.362996 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0fb9a08ccc09bee63c7ea1c38b7828a31ec56f46b1f756a4c20c8dafcd8507b" Jan 30 21:55:19 crc kubenswrapper[4979]: I0130 21:55:19.363052 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9" Jan 30 21:55:19 crc kubenswrapper[4979]: I0130 21:55:19.365428 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhr5r" event={"ID":"673080e1-83e2-49f1-9c9a-713fb9367bea","Type":"ContainerStarted","Data":"3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742"} Jan 30 21:55:19 crc kubenswrapper[4979]: I0130 21:55:19.387913 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fhr5r" podStartSLOduration=2.898179854 podStartE2EDuration="5.38788914s" podCreationTimestamp="2026-01-30 21:55:14 +0000 UTC" firstStartedPulling="2026-01-30 21:55:16.334867083 +0000 UTC m=+912.296114136" lastFinishedPulling="2026-01-30 21:55:18.824576389 +0000 UTC m=+914.785823422" observedRunningTime="2026-01-30 21:55:19.385623429 +0000 UTC m=+915.346870462" watchObservedRunningTime="2026-01-30 21:55:19.38788914 +0000 UTC m=+915.349136173" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.838312 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tv5t2"] Jan 30 21:55:23 crc kubenswrapper[4979]: E0130 21:55:23.839255 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="extract" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.839283 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="extract" Jan 30 21:55:23 crc kubenswrapper[4979]: E0130 21:55:23.839308 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="util" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.839320 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="util" Jan 30 21:55:23 crc 
kubenswrapper[4979]: E0130 21:55:23.839356 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="pull" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.839370 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="pull" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.839530 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="24460103-3748-49b9-9231-5a6e63ede52c" containerName="extract" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.840302 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.844521 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-57qqj" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.845396 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.847479 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.851772 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tv5t2"] Jan 30 21:55:23 crc kubenswrapper[4979]: I0130 21:55:23.988159 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxcp\" (UniqueName: \"kubernetes.io/projected/949791a2-d4bd-4ec8-8e34-70a2d0af1af1-kube-api-access-ggxcp\") pod \"nmstate-operator-646758c888-tv5t2\" (UID: \"949791a2-d4bd-4ec8-8e34-70a2d0af1af1\") " pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.090269 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxcp\" (UniqueName: \"kubernetes.io/projected/949791a2-d4bd-4ec8-8e34-70a2d0af1af1-kube-api-access-ggxcp\") pod \"nmstate-operator-646758c888-tv5t2\" (UID: \"949791a2-d4bd-4ec8-8e34-70a2d0af1af1\") " pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.118629 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggxcp\" (UniqueName: \"kubernetes.io/projected/949791a2-d4bd-4ec8-8e34-70a2d0af1af1-kube-api-access-ggxcp\") pod \"nmstate-operator-646758c888-tv5t2\" (UID: \"949791a2-d4bd-4ec8-8e34-70a2d0af1af1\") " pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.161960 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.367252 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tv5t2"] Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.402613 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" event={"ID":"949791a2-d4bd-4ec8-8e34-70a2d0af1af1","Type":"ContainerStarted","Data":"9596bc9f0d3c52c86e25051a44114fef18caae47d8a09046b38a088865ba0fd1"} Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.963734 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:24 crc kubenswrapper[4979]: I0130 21:55:24.964325 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:26 crc kubenswrapper[4979]: I0130 21:55:26.007793 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fhr5r" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="registry-server" probeResult="failure" output=< Jan 30 21:55:26 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s Jan 30 21:55:26 crc kubenswrapper[4979]: > Jan 30 21:55:28 crc kubenswrapper[4979]: I0130 21:55:28.458172 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" event={"ID":"949791a2-d4bd-4ec8-8e34-70a2d0af1af1","Type":"ContainerStarted","Data":"c6e8b443a5be98f70ec81521d22fa2448f8261d24e12dffec37095d2f1d194e7"} Jan 30 21:55:28 crc kubenswrapper[4979]: I0130 21:55:28.485736 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-tv5t2" podStartSLOduration=2.177794615 podStartE2EDuration="5.485697904s" podCreationTimestamp="2026-01-30 21:55:23 +0000 UTC" firstStartedPulling="2026-01-30 21:55:24.385900775 +0000 UTC m=+920.347147818" lastFinishedPulling="2026-01-30 21:55:27.693804034 +0000 UTC m=+923.655051107" observedRunningTime="2026-01-30 21:55:28.480233756 +0000 UTC m=+924.441480829" watchObservedRunningTime="2026-01-30 21:55:28.485697904 +0000 UTC m=+924.446944977" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.040344 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.040884 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.440289 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nqwmx"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.441536 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.451794 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-56xgf" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.469719 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nqwmx"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.526014 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.527026 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.529312 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.529415 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2xs54"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.529921 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.530266 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89btw\" (UniqueName: \"kubernetes.io/projected/f03646b0-8776-45cc-9594-a0266af57be5-kube-api-access-89btw\") pod \"nmstate-metrics-54757c584b-nqwmx\" (UID: \"f03646b0-8776-45cc-9594-a0266af57be5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.561902 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631731 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-dbus-socket\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631793 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89btw\" (UniqueName: \"kubernetes.io/projected/f03646b0-8776-45cc-9594-a0266af57be5-kube-api-access-89btw\") pod \"nmstate-metrics-54757c584b-nqwmx\" (UID: \"f03646b0-8776-45cc-9594-a0266af57be5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631834 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9dm\" (UniqueName: \"kubernetes.io/projected/63bf7e31-b607-4b21-9753-eb05a7bfb987-kube-api-access-5n9dm\") pod \"nmstate-webhook-8474b5b9d8-f7cxj\" (UID: \"63bf7e31-b607-4b21-9753-eb05a7bfb987\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631871 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrt4j\" (UniqueName: \"kubernetes.io/projected/2bf07cc3-611c-44b3-9fd0-831f5b718f11-kube-api-access-zrt4j\") pod \"nmstate-handler-2xs54\" (UID: 
\"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631900 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/63bf7e31-b607-4b21-9753-eb05a7bfb987-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f7cxj\" (UID: \"63bf7e31-b607-4b21-9753-eb05a7bfb987\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631915 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-ovs-socket\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.631934 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-nmstate-lock\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.665118 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.666219 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.668317 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.669184 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.669345 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2pmmn" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.671586 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89btw\" (UniqueName: \"kubernetes.io/projected/f03646b0-8776-45cc-9594-a0266af57be5-kube-api-access-89btw\") pod \"nmstate-metrics-54757c584b-nqwmx\" (UID: \"f03646b0-8776-45cc-9594-a0266af57be5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.704860 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736608 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/63bf7e31-b607-4b21-9753-eb05a7bfb987-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f7cxj\" (UID: \"63bf7e31-b607-4b21-9753-eb05a7bfb987\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736684 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-ovs-socket\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " 
pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736719 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-nmstate-lock\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736753 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-dbus-socket\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbl6s\" (UniqueName: \"kubernetes.io/projected/4e67f5da-565e-4850-ac22-136965b5e12d-kube-api-access-xbl6s\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-ovs-socket\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736829 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-nmstate-lock\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736842 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n9dm\" (UniqueName: \"kubernetes.io/projected/63bf7e31-b607-4b21-9753-eb05a7bfb987-kube-api-access-5n9dm\") pod \"nmstate-webhook-8474b5b9d8-f7cxj\" (UID: \"63bf7e31-b607-4b21-9753-eb05a7bfb987\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.736954 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e67f5da-565e-4850-ac22-136965b5e12d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.737039 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrt4j\" (UniqueName: \"kubernetes.io/projected/2bf07cc3-611c-44b3-9fd0-831f5b718f11-kube-api-access-zrt4j\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.737121 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4e67f5da-565e-4850-ac22-136965b5e12d-nginx-conf\") pod 
\"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.737148 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2bf07cc3-611c-44b3-9fd0-831f5b718f11-dbus-socket\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.753971 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n9dm\" (UniqueName: \"kubernetes.io/projected/63bf7e31-b607-4b21-9753-eb05a7bfb987-kube-api-access-5n9dm\") pod \"nmstate-webhook-8474b5b9d8-f7cxj\" (UID: \"63bf7e31-b607-4b21-9753-eb05a7bfb987\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.755652 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/63bf7e31-b607-4b21-9753-eb05a7bfb987-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f7cxj\" (UID: \"63bf7e31-b607-4b21-9753-eb05a7bfb987\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.764992 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrt4j\" (UniqueName: \"kubernetes.io/projected/2bf07cc3-611c-44b3-9fd0-831f5b718f11-kube-api-access-zrt4j\") pod \"nmstate-handler-2xs54\" (UID: \"2bf07cc3-611c-44b3-9fd0-831f5b718f11\") " pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.765446 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.838148 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e67f5da-565e-4850-ac22-136965b5e12d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.838240 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4e67f5da-565e-4850-ac22-136965b5e12d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.838328 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbl6s\" (UniqueName: \"kubernetes.io/projected/4e67f5da-565e-4850-ac22-136965b5e12d-kube-api-access-xbl6s\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.840461 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4e67f5da-565e-4850-ac22-136965b5e12d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.848161 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.848709 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e67f5da-565e-4850-ac22-136965b5e12d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.857476 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbl6s\" (UniqueName: \"kubernetes.io/projected/4e67f5da-565e-4850-ac22-136965b5e12d-kube-api-access-xbl6s\") pod \"nmstate-console-plugin-7754f76f8b-84fjt\" (UID: \"4e67f5da-565e-4850-ac22-136965b5e12d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.866570 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.883129 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7fcb4db5f4-754dt"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.883968 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: W0130 21:55:32.897083 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bf07cc3_611c_44b3_9fd0_831f5b718f11.slice/crio-20ab7676315cb1ed54a5dcc044e5d977057045442eace92709ffd362edd3ffe3 WatchSource:0}: Error finding container 20ab7676315cb1ed54a5dcc044e5d977057045442eace92709ffd362edd3ffe3: Status 404 returned error can't find the container with id 20ab7676315cb1ed54a5dcc044e5d977057045442eace92709ffd362edd3ffe3 Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.898872 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fcb4db5f4-754dt"] Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.940245 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-service-ca\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.940891 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-trusted-ca-bundle\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.941014 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-console-config\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.941073 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-oauth-serving-cert\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.941538 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfmld\" (UniqueName: \"kubernetes.io/projected/7afff541-d8aa-462f-b084-a80ff0e2729a-kube-api-access-cfmld\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.941608 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afff541-d8aa-462f-b084-a80ff0e2729a-console-serving-cert\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:32 crc kubenswrapper[4979]: I0130 21:55:32.941665 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7afff541-d8aa-462f-b084-a80ff0e2729a-console-oauth-config\") pod 
\"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.002582 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.042503 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfmld\" (UniqueName: \"kubernetes.io/projected/7afff541-d8aa-462f-b084-a80ff0e2729a-kube-api-access-cfmld\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.042577 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afff541-d8aa-462f-b084-a80ff0e2729a-console-serving-cert\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.042610 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7afff541-d8aa-462f-b084-a80ff0e2729a-console-oauth-config\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.042636 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-service-ca\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.043844 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-service-ca\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.042658 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-trusted-ca-bundle\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.043913 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-console-config\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.043947 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-oauth-serving-cert\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.044599 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-oauth-serving-cert\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.045194 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-console-config\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.048824 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7afff541-d8aa-462f-b084-a80ff0e2729a-trusted-ca-bundle\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.050865 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afff541-d8aa-462f-b084-a80ff0e2729a-console-serving-cert\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.052633 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7afff541-d8aa-462f-b084-a80ff0e2729a-console-oauth-config\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.064737 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfmld\" (UniqueName: \"kubernetes.io/projected/7afff541-d8aa-462f-b084-a80ff0e2729a-kube-api-access-cfmld\") pod \"console-7fcb4db5f4-754dt\" (UID: \"7afff541-d8aa-462f-b084-a80ff0e2729a\") " pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.102316 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nqwmx"] Jan 30 21:55:33 crc kubenswrapper[4979]: W0130 21:55:33.113010 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf03646b0_8776_45cc_9594_a0266af57be5.slice/crio-8de01a2471ff5bf2a6e56629c073e1d921f5b8a61ac310a1f754d231b33a6a44 WatchSource:0}: Error finding container 8de01a2471ff5bf2a6e56629c073e1d921f5b8a61ac310a1f754d231b33a6a44: Status 404 returned error can't find the container with id 8de01a2471ff5bf2a6e56629c073e1d921f5b8a61ac310a1f754d231b33a6a44 Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.160817 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj"] Jan 30 21:55:33 crc kubenswrapper[4979]: W0130 21:55:33.163828 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63bf7e31_b607_4b21_9753_eb05a7bfb987.slice/crio-797aef8e80098aef45b869f5b42b25f31c19aad257c099b64f24c3e6bb0bab98 WatchSource:0}: Error finding container 
797aef8e80098aef45b869f5b42b25f31c19aad257c099b64f24c3e6bb0bab98: Status 404 returned error can't find the container with id 797aef8e80098aef45b869f5b42b25f31c19aad257c099b64f24c3e6bb0bab98 Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.209761 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7fcb4db5f4-754dt" Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.246596 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt"] Jan 30 21:55:33 crc kubenswrapper[4979]: W0130 21:55:33.261540 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e67f5da_565e_4850_ac22_136965b5e12d.slice/crio-98e7fa5613739a578da818520240a3813c287ea626929338375971afff991ad5 WatchSource:0}: Error finding container 98e7fa5613739a578da818520240a3813c287ea626929338375971afff991ad5: Status 404 returned error can't find the container with id 98e7fa5613739a578da818520240a3813c287ea626929338375971afff991ad5 Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.422329 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fcb4db5f4-754dt"] Jan 30 21:55:33 crc kubenswrapper[4979]: W0130 21:55:33.430189 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7afff541_d8aa_462f_b084_a80ff0e2729a.slice/crio-a68edb0755657a86e723939a3c78152a737104b0a106b3933c8697033a67af67 WatchSource:0}: Error finding container a68edb0755657a86e723939a3c78152a737104b0a106b3933c8697033a67af67: Status 404 returned error can't find the container with id a68edb0755657a86e723939a3c78152a737104b0a106b3933c8697033a67af67 Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.509213 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fcb4db5f4-754dt" event={"ID":"7afff541-d8aa-462f-b084-a80ff0e2729a","Type":"ContainerStarted","Data":"a68edb0755657a86e723939a3c78152a737104b0a106b3933c8697033a67af67"} Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.510524 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" event={"ID":"63bf7e31-b607-4b21-9753-eb05a7bfb987","Type":"ContainerStarted","Data":"797aef8e80098aef45b869f5b42b25f31c19aad257c099b64f24c3e6bb0bab98"} Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.511581 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" event={"ID":"f03646b0-8776-45cc-9594-a0266af57be5","Type":"ContainerStarted","Data":"8de01a2471ff5bf2a6e56629c073e1d921f5b8a61ac310a1f754d231b33a6a44"} Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.513054 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2xs54" event={"ID":"2bf07cc3-611c-44b3-9fd0-831f5b718f11","Type":"ContainerStarted","Data":"20ab7676315cb1ed54a5dcc044e5d977057045442eace92709ffd362edd3ffe3"} Jan 30 21:55:33 crc kubenswrapper[4979]: I0130 21:55:33.514206 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" event={"ID":"4e67f5da-565e-4850-ac22-136965b5e12d","Type":"ContainerStarted","Data":"98e7fa5613739a578da818520240a3813c287ea626929338375971afff991ad5"} Jan 30 21:55:34 crc kubenswrapper[4979]: I0130 21:55:34.523496 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-7fcb4db5f4-754dt" event={"ID":"7afff541-d8aa-462f-b084-a80ff0e2729a","Type":"ContainerStarted","Data":"8b8cd7510018dc6cee8c7141f201dc88ce9b9c6adabeb22011cdcf928c6a0a0d"} Jan 30 21:55:34 crc kubenswrapper[4979]: I0130 21:55:34.543531 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7fcb4db5f4-754dt" podStartSLOduration=2.543504294 podStartE2EDuration="2.543504294s" podCreationTimestamp="2026-01-30 21:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:55:34.54114563 +0000 UTC m=+930.502392683" watchObservedRunningTime="2026-01-30 21:55:34.543504294 +0000 UTC m=+930.504751327" Jan 30 21:55:35 crc kubenswrapper[4979]: I0130 21:55:35.007538 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:35 crc kubenswrapper[4979]: I0130 21:55:35.050178 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:35 crc kubenswrapper[4979]: I0130 21:55:35.245513 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fhr5r"] Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.551150 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" event={"ID":"f03646b0-8776-45cc-9594-a0266af57be5","Type":"ContainerStarted","Data":"1be464082ec264ba4485332f2f612c00b20c867850f4ace43f8c1b286b7d62b0"} Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.554631 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2xs54" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.557324 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" event={"ID":"4e67f5da-565e-4850-ac22-136965b5e12d","Type":"ContainerStarted","Data":"2b82c4f4be2eb38ccaa379a4e0ec585c8e20bae3ce80beade4ed93f9c0d714a7"} Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.560062 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fhr5r" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="registry-server" containerID="cri-o://3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742" gracePeriod=2 Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.560686 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" event={"ID":"63bf7e31-b607-4b21-9753-eb05a7bfb987","Type":"ContainerStarted","Data":"96a2fb0ce51d7d5ca9c15bc7ec31b57f4881e3e86dc1a6042725b8dc07c14654"} Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.565672 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.580715 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2xs54" podStartSLOduration=1.282589291 podStartE2EDuration="4.580687225s" podCreationTimestamp="2026-01-30 21:55:32 +0000 UTC" firstStartedPulling="2026-01-30 21:55:32.906075091 +0000 UTC m=+928.867322124" lastFinishedPulling="2026-01-30 21:55:36.204172985 +0000 UTC m=+932.165420058" observedRunningTime="2026-01-30 21:55:36.578020593 
+0000 UTC m=+932.539267626" watchObservedRunningTime="2026-01-30 21:55:36.580687225 +0000 UTC m=+932.541934258" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.599303 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-84fjt" podStartSLOduration=1.667536389 podStartE2EDuration="4.599277367s" podCreationTimestamp="2026-01-30 21:55:32 +0000 UTC" firstStartedPulling="2026-01-30 21:55:33.264675227 +0000 UTC m=+929.225922260" lastFinishedPulling="2026-01-30 21:55:36.196416205 +0000 UTC m=+932.157663238" observedRunningTime="2026-01-30 21:55:36.597805278 +0000 UTC m=+932.559052311" watchObservedRunningTime="2026-01-30 21:55:36.599277367 +0000 UTC m=+932.560524400" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.672552 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj" podStartSLOduration=1.634266437 podStartE2EDuration="4.672534265s" podCreationTimestamp="2026-01-30 21:55:32 +0000 UTC" firstStartedPulling="2026-01-30 21:55:33.165999349 +0000 UTC m=+929.127246382" lastFinishedPulling="2026-01-30 21:55:36.204267167 +0000 UTC m=+932.165514210" observedRunningTime="2026-01-30 21:55:36.669632107 +0000 UTC m=+932.630879140" watchObservedRunningTime="2026-01-30 21:55:36.672534265 +0000 UTC m=+932.633781298" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.892149 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.904263 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqdjn\" (UniqueName: \"kubernetes.io/projected/673080e1-83e2-49f1-9c9a-713fb9367bea-kube-api-access-nqdjn\") pod \"673080e1-83e2-49f1-9c9a-713fb9367bea\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.904342 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-catalog-content\") pod \"673080e1-83e2-49f1-9c9a-713fb9367bea\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.904374 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-utilities\") pod \"673080e1-83e2-49f1-9c9a-713fb9367bea\" (UID: \"673080e1-83e2-49f1-9c9a-713fb9367bea\") " Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.905449 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-utilities" (OuterVolumeSpecName: "utilities") pod "673080e1-83e2-49f1-9c9a-713fb9367bea" (UID: "673080e1-83e2-49f1-9c9a-713fb9367bea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:55:36 crc kubenswrapper[4979]: I0130 21:55:36.911250 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/673080e1-83e2-49f1-9c9a-713fb9367bea-kube-api-access-nqdjn" (OuterVolumeSpecName: "kube-api-access-nqdjn") pod "673080e1-83e2-49f1-9c9a-713fb9367bea" (UID: "673080e1-83e2-49f1-9c9a-713fb9367bea"). InnerVolumeSpecName "kube-api-access-nqdjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.006089 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqdjn\" (UniqueName: \"kubernetes.io/projected/673080e1-83e2-49f1-9c9a-713fb9367bea-kube-api-access-nqdjn\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.006122 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.023778 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "673080e1-83e2-49f1-9c9a-713fb9367bea" (UID: "673080e1-83e2-49f1-9c9a-713fb9367bea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.107226 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/673080e1-83e2-49f1-9c9a-713fb9367bea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.572879 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2xs54" event={"ID":"2bf07cc3-611c-44b3-9fd0-831f5b718f11","Type":"ContainerStarted","Data":"e0900ace803ae06e5ce574c7da1537cc845a405ccd01943677907a19b83308de"} Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.577005 4979 generic.go:334] "Generic (PLEG): container finished" podID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerID="3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742" exitCode=0 Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.577167 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fhr5r" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.577095 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhr5r" event={"ID":"673080e1-83e2-49f1-9c9a-713fb9367bea","Type":"ContainerDied","Data":"3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742"} Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.577364 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhr5r" event={"ID":"673080e1-83e2-49f1-9c9a-713fb9367bea","Type":"ContainerDied","Data":"eeeef076cf56a5e32adddbc1f962f1d02924f51e813862b47bf3524625754d23"} Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.577407 4979 scope.go:117] "RemoveContainer" containerID="3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.604646 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fhr5r"] Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.613493 4979 scope.go:117] "RemoveContainer" containerID="3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.619332 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fhr5r"] Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.646621 4979 scope.go:117] "RemoveContainer" containerID="c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.667109 4979 scope.go:117] "RemoveContainer" containerID="3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742" Jan 30 21:55:37 crc kubenswrapper[4979]: E0130 21:55:37.667677 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742\": container with ID starting with 3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742 not found: ID does not exist" containerID="3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.667751 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742"} err="failed to get container status \"3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742\": rpc error: code = NotFound desc = could not find container \"3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742\": container with ID starting with 3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742 not found: ID does not exist" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.667805 4979 scope.go:117] "RemoveContainer" containerID="3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60" Jan 30 21:55:37 crc kubenswrapper[4979]: E0130 21:55:37.668350 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60\": container with ID starting with 3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60 not found: ID does not exist" containerID="3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60" Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.668412 4979 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60"} err="failed to get container status \"3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60\": rpc error: code = NotFound desc = could not find container \"3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60\": container with ID starting with 3f23df6fe361f43083bbd3a5ec0cd48500e616e388cc75cd21f65e1d34e57c60 not found: ID does not exist"
Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.668478 4979 scope.go:117] "RemoveContainer" containerID="c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988"
Jan 30 21:55:37 crc kubenswrapper[4979]: E0130 21:55:37.668891 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988\": container with ID starting with c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988 not found: ID does not exist" containerID="c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988"
Jan 30 21:55:37 crc kubenswrapper[4979]: I0130 21:55:37.668939 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988"} err="failed to get container status \"c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988\": rpc error: code = NotFound desc = could not find container \"c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988\": container with ID starting with c47da80113641159eaf9bec9132d43c911868536708fb6385993cd0b399a9988 not found: ID does not exist"
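The E-level entries above are a benign race rather than a real failure: the containers were already removed from CRI-O, so the follow-up ContainerStatus call returns rpc code = NotFound and DeleteContainer simply reports it. A minimal Go sketch of how a CRI client can distinguish that case using gRPC status codes (assuming google.golang.org/grpc):

```go
// Sketch: treat the NotFound errors logged above as "already deleted"
// rather than as failures (assumes google.golang.org/grpc).
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer is a stand-in for a CRI RemoveContainer/ContainerStatus
// round trip; here it always returns the NotFound shape seen in the log.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: container with ID starting with %s not found: ID does not exist", id, id)
}

func main() {
	err := removeContainer("3b7a4305be4b02a396701fd34b31be6b8ed432dd1ca54e07e12f291a658d2742")
	if status.Code(err) == codes.NotFound {
		// The container is already gone; the desired end state is reached.
		fmt.Println("container already removed; treating as success")
		return
	}
	if err != nil {
		fmt.Println("real failure:", err)
	}
}
```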
Jan 30 21:55:39 crc kubenswrapper[4979]: I0130 21:55:39.080272 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" path="/var/lib/kubelet/pods/673080e1-83e2-49f1-9c9a-713fb9367bea/volumes"
Jan 30 21:55:39 crc kubenswrapper[4979]: I0130 21:55:39.598821 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" event={"ID":"f03646b0-8776-45cc-9594-a0266af57be5","Type":"ContainerStarted","Data":"eb4cb0a8dff540b8fffe7f7f9ab6fc9f60dd806a7846372c1196b89789e74b15"}
Jan 30 21:55:39 crc kubenswrapper[4979]: I0130 21:55:39.628872 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-nqwmx" podStartSLOduration=2.0151886 podStartE2EDuration="7.62884099s" podCreationTimestamp="2026-01-30 21:55:32 +0000 UTC" firstStartedPulling="2026-01-30 21:55:33.116410949 +0000 UTC m=+929.077657982" lastFinishedPulling="2026-01-30 21:55:38.730063349 +0000 UTC m=+934.691310372" observedRunningTime="2026-01-30 21:55:39.622677583 +0000 UTC m=+935.583924646" watchObservedRunningTime="2026-01-30 21:55:39.62884099 +0000 UTC m=+935.590088033"
Jan 30 21:55:42 crc kubenswrapper[4979]: I0130 21:55:42.905122 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2xs54"
Jan 30 21:55:43 crc kubenswrapper[4979]: I0130 21:55:43.210066 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7fcb4db5f4-754dt"
Jan 30 21:55:43 crc kubenswrapper[4979]: I0130 21:55:43.210340 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7fcb4db5f4-754dt"
Jan 30 21:55:43 crc kubenswrapper[4979]: I0130 21:55:43.217298 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7fcb4db5f4-754dt"
Jan 30 21:55:43 crc kubenswrapper[4979]: I0130 21:55:43.635025 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7fcb4db5f4-754dt"
Jan 30 21:55:43 crc kubenswrapper[4979]: I0130 21:55:43.715598 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-h6sv5"]
Jan 30 21:55:52 crc kubenswrapper[4979]: I0130 21:55:52.859196 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f7cxj"
Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.413715 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s2ltp"]
Jan 30 21:55:53 crc kubenswrapper[4979]: E0130 21:55:53.414422 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="registry-server"
Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.414438 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="registry-server"
Jan 30 21:55:53 crc kubenswrapper[4979]: E0130 21:55:53.414449 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="extract-content"
Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.414457 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="extract-content"
Jan 30 21:55:53 crc kubenswrapper[4979]: E0130 21:55:53.414468 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="extract-utilities"
Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.414480 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="extract-utilities"
Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.414606 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="673080e1-83e2-49f1-9c9a-713fb9367bea" containerName="registry-server"
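Before admitting community-operators-s2ltp, the CPU and memory managers drop leftover per-container accounting recorded under the deleted redhat-operators pod's UID; the E-level RemoveStaleState lines flag that state still existed for a pod the API no longer lists. A schematic Go sketch of that cleanup, with hypothetical types:

```go
// Schematic sketch (hypothetical types) of the stale-state cleanup the CPU
// and memory managers log above: drop per-(podUID, container) assignments
// for pods that are no longer active before admitting a new pod.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if activePods[k.podUID] {
			continue
		}
		fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
			k.podUID, k.container)
		delete(assignments, k) // the "Deleted CPUSet assignment" step
	}
}

func main() {
	old := "673080e1-83e2-49f1-9c9a-713fb9367bea" // the deleted catalog pod's UID
	a := map[key]string{
		{old, "registry-server"}:   "0-3",
		{old, "extract-content"}:   "0-3",
		{old, "extract-utilities"}: "0-3",
	}
	removeStaleState(a, map[string]bool{}) // no active pod carries that UID anymore
}
```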
Need to start a new one" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.442213 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2ltp"] Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.583758 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-utilities\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.583877 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-catalog-content\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.584180 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfnm5\" (UniqueName: \"kubernetes.io/projected/c204f004-5a44-4602-9a51-b1364cd9e46f-kube-api-access-kfnm5\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.685622 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-utilities\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.685714 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-catalog-content\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.685807 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfnm5\" (UniqueName: \"kubernetes.io/projected/c204f004-5a44-4602-9a51-b1364cd9e46f-kube-api-access-kfnm5\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.686420 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-utilities\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.686529 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-catalog-content\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.733880 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kfnm5\" (UniqueName: \"kubernetes.io/projected/c204f004-5a44-4602-9a51-b1364cd9e46f-kube-api-access-kfnm5\") pod \"community-operators-s2ltp\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:53 crc kubenswrapper[4979]: I0130 21:55:53.740718 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:55:54 crc kubenswrapper[4979]: I0130 21:55:54.274605 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2ltp"] Jan 30 21:55:54 crc kubenswrapper[4979]: I0130 21:55:54.723266 4979 generic.go:334] "Generic (PLEG): container finished" podID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerID="fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53" exitCode=0 Jan 30 21:55:54 crc kubenswrapper[4979]: I0130 21:55:54.723331 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ltp" event={"ID":"c204f004-5a44-4602-9a51-b1364cd9e46f","Type":"ContainerDied","Data":"fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53"} Jan 30 21:55:54 crc kubenswrapper[4979]: I0130 21:55:54.723370 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ltp" event={"ID":"c204f004-5a44-4602-9a51-b1364cd9e46f","Type":"ContainerStarted","Data":"89992e57bfeaf566ee1898ace499fb3c14b6f08c56f4e7414987da47cef73f72"} Jan 30 21:55:56 crc kubenswrapper[4979]: I0130 21:55:56.745951 4979 generic.go:334] "Generic (PLEG): container finished" podID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerID="b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1" exitCode=0 Jan 30 21:55:56 crc kubenswrapper[4979]: I0130 21:55:56.746068 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ltp" event={"ID":"c204f004-5a44-4602-9a51-b1364cd9e46f","Type":"ContainerDied","Data":"b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1"} Jan 30 21:55:57 crc kubenswrapper[4979]: I0130 21:55:57.760105 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ltp" event={"ID":"c204f004-5a44-4602-9a51-b1364cd9e46f","Type":"ContainerStarted","Data":"f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855"} Jan 30 21:55:57 crc kubenswrapper[4979]: I0130 21:55:57.784431 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s2ltp" podStartSLOduration=2.369773416 podStartE2EDuration="4.784404863s" podCreationTimestamp="2026-01-30 21:55:53 +0000 UTC" firstStartedPulling="2026-01-30 21:55:54.726133734 +0000 UTC m=+950.687380767" lastFinishedPulling="2026-01-30 21:55:57.140765161 +0000 UTC m=+953.102012214" observedRunningTime="2026-01-30 21:55:57.782376668 +0000 UTC m=+953.743623701" watchObservedRunningTime="2026-01-30 21:55:57.784404863 +0000 UTC m=+953.745651886" Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.039889 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.040979 4979 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.041077 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.041988 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae293b4c8eb11a00dbc67116c5050f26eebdb7d47b98e26880adeb06c2d3bf28"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.042072 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://ae293b4c8eb11a00dbc67116c5050f26eebdb7d47b98e26880adeb06c2d3bf28" gracePeriod=600 Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.801995 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="ae293b4c8eb11a00dbc67116c5050f26eebdb7d47b98e26880adeb06c2d3bf28" exitCode=0 Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.802065 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"ae293b4c8eb11a00dbc67116c5050f26eebdb7d47b98e26880adeb06c2d3bf28"} Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.802970 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"d09f2b9fb9e70c284933384af86903d057bc10cc69d7514572c72f1e0e4710ff"} Jan 30 21:56:02 crc kubenswrapper[4979]: I0130 21:56:02.803002 4979 scope.go:117] "RemoveContainer" containerID="bb31c8508ba9d5d13bdcaefa52c28a222060abce65ea336c482658b625bc9222" Jan 30 21:56:03 crc kubenswrapper[4979]: I0130 21:56:03.742502 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:56:03 crc kubenswrapper[4979]: I0130 21:56:03.743572 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:56:03 crc kubenswrapper[4979]: I0130 21:56:03.815976 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:56:03 crc kubenswrapper[4979]: I0130 21:56:03.874198 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:56:04 crc kubenswrapper[4979]: I0130 21:56:04.062633 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2ltp"] Jan 30 21:56:05 crc kubenswrapper[4979]: I0130 21:56:05.837980 4979 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-s2ltp" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="registry-server" containerID="cri-o://f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855" gracePeriod=2 Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.267869 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.417193 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-utilities\") pod \"c204f004-5a44-4602-9a51-b1364cd9e46f\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.417349 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-catalog-content\") pod \"c204f004-5a44-4602-9a51-b1364cd9e46f\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.417471 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfnm5\" (UniqueName: \"kubernetes.io/projected/c204f004-5a44-4602-9a51-b1364cd9e46f-kube-api-access-kfnm5\") pod \"c204f004-5a44-4602-9a51-b1364cd9e46f\" (UID: \"c204f004-5a44-4602-9a51-b1364cd9e46f\") " Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.419260 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-utilities" (OuterVolumeSpecName: "utilities") pod "c204f004-5a44-4602-9a51-b1364cd9e46f" (UID: "c204f004-5a44-4602-9a51-b1364cd9e46f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.425571 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c204f004-5a44-4602-9a51-b1364cd9e46f-kube-api-access-kfnm5" (OuterVolumeSpecName: "kube-api-access-kfnm5") pod "c204f004-5a44-4602-9a51-b1364cd9e46f" (UID: "c204f004-5a44-4602-9a51-b1364cd9e46f"). InnerVolumeSpecName "kube-api-access-kfnm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.484835 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c204f004-5a44-4602-9a51-b1364cd9e46f" (UID: "c204f004-5a44-4602-9a51-b1364cd9e46f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.519019 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.519355 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfnm5\" (UniqueName: \"kubernetes.io/projected/c204f004-5a44-4602-9a51-b1364cd9e46f-kube-api-access-kfnm5\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.519524 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204f004-5a44-4602-9a51-b1364cd9e46f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.847934 4979 generic.go:334] "Generic (PLEG): container finished" podID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerID="f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855" exitCode=0 Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.848563 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2ltp" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.848499 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ltp" event={"ID":"c204f004-5a44-4602-9a51-b1364cd9e46f","Type":"ContainerDied","Data":"f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855"} Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.850656 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ltp" event={"ID":"c204f004-5a44-4602-9a51-b1364cd9e46f","Type":"ContainerDied","Data":"89992e57bfeaf566ee1898ace499fb3c14b6f08c56f4e7414987da47cef73f72"} Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.850846 4979 scope.go:117] "RemoveContainer" containerID="f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.895260 4979 scope.go:117] "RemoveContainer" containerID="b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.904635 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2ltp"] Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.908845 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s2ltp"] Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.925294 4979 scope.go:117] "RemoveContainer" containerID="fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.949299 4979 scope.go:117] "RemoveContainer" containerID="f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855" Jan 30 21:56:06 crc kubenswrapper[4979]: E0130 21:56:06.950098 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855\": container with ID starting with f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855 not found: ID does not exist" containerID="f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.950231 
4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855"} err="failed to get container status \"f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855\": rpc error: code = NotFound desc = could not find container \"f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855\": container with ID starting with f07927ac9e3ecb37ab40b771014bc2dafbb28db956ffa48a4864d3ff5a312855 not found: ID does not exist" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.950328 4979 scope.go:117] "RemoveContainer" containerID="b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1" Jan 30 21:56:06 crc kubenswrapper[4979]: E0130 21:56:06.950919 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1\": container with ID starting with b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1 not found: ID does not exist" containerID="b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.951004 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1"} err="failed to get container status \"b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1\": rpc error: code = NotFound desc = could not find container \"b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1\": container with ID starting with b056f0f33a850093ed8dfa8b7335c0cd8408ea8d6310a50dda2aed1201f98ec1 not found: ID does not exist" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.951088 4979 scope.go:117] "RemoveContainer" containerID="fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53" Jan 30 21:56:06 crc kubenswrapper[4979]: E0130 21:56:06.951538 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53\": container with ID starting with fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53 not found: ID does not exist" containerID="fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53" Jan 30 21:56:06 crc kubenswrapper[4979]: I0130 21:56:06.951618 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53"} err="failed to get container status \"fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53\": rpc error: code = NotFound desc = could not find container \"fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53\": container with ID starting with fca22e2c74917e8334de369bfa8a5ba96a830b0e134947895f0cc493756bdf53 not found: ID does not exist" Jan 30 21:56:07 crc kubenswrapper[4979]: I0130 21:56:07.083895 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" path="/var/lib/kubelet/pods/c204f004-5a44-4602-9a51-b1364cd9e46f/volumes" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.117152 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr"] Jan 30 21:56:08 crc kubenswrapper[4979]: E0130 21:56:08.118920 4979 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="extract-content" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.118986 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="extract-content" Jan 30 21:56:08 crc kubenswrapper[4979]: E0130 21:56:08.119059 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="extract-utilities" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.119125 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="extract-utilities" Jan 30 21:56:08 crc kubenswrapper[4979]: E0130 21:56:08.119180 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="registry-server" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.119257 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="registry-server" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.119407 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c204f004-5a44-4602-9a51-b1364cd9e46f" containerName="registry-server" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.120298 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.123625 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.135692 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr"] Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.172214 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnf25\" (UniqueName: \"kubernetes.io/projected/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-kube-api-access-jnf25\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.172293 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.172595 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.274321 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnf25\" 
(UniqueName: \"kubernetes.io/projected/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-kube-api-access-jnf25\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.274378 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.274437 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.274923 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.275074 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.299422 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnf25\" (UniqueName: \"kubernetes.io/projected/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-kube-api-access-jnf25\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.435349 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.667566 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr"] Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.792324 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-h6sv5" podUID="cc25d794-4ead-4436-a026-179f655c13d4" containerName="console" containerID="cri-o://37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322" gracePeriod=15 Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.862189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" event={"ID":"3a16a524-cbae-4652-8fbd-e0b2430ec7d5","Type":"ContainerStarted","Data":"fa202ddbc836e952b85b3227e8f9d2ef0dfbc5d3f331b0b87d6066b738c774c0"} Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.886826 4979 patch_prober.go:28] interesting pod/console-f9d7485db-h6sv5 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 21:56:08 crc kubenswrapper[4979]: I0130 21:56:08.887734 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-h6sv5" podUID="cc25d794-4ead-4436-a026-179f655c13d4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.147850 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-h6sv5_cc25d794-4ead-4436-a026-179f655c13d4/console/0.log" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.147932 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.286899 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-serving-cert\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.287012 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-service-ca\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.287059 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-console-config\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.287119 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.287991 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-service-ca" (OuterVolumeSpecName: "service-ca") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.287994 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-console-config" (OuterVolumeSpecName: "console-config") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.288280 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.287229 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqg47\" (UniqueName: \"kubernetes.io/projected/cc25d794-4ead-4436-a026-179f655c13d4-kube-api-access-bqg47\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.288384 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-trusted-ca-bundle\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.288438 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-oauth-config\") pod \"cc25d794-4ead-4436-a026-179f655c13d4\" (UID: \"cc25d794-4ead-4436-a026-179f655c13d4\") " Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.288767 4979 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.288786 4979 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.288799 4979 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.289516 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.295559 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.295627 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc25d794-4ead-4436-a026-179f655c13d4-kube-api-access-bqg47" (OuterVolumeSpecName: "kube-api-access-bqg47") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "kube-api-access-bqg47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.295851 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "cc25d794-4ead-4436-a026-179f655c13d4" (UID: "cc25d794-4ead-4436-a026-179f655c13d4"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.390154 4979 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc25d794-4ead-4436-a026-179f655c13d4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.390213 4979 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.390229 4979 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc25d794-4ead-4436-a026-179f655c13d4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.390241 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqg47\" (UniqueName: \"kubernetes.io/projected/cc25d794-4ead-4436-a026-179f655c13d4-kube-api-access-bqg47\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.871014 4979 generic.go:334] "Generic (PLEG): container finished" podID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerID="8964675dcc3f2890a07af98a3b878fffe8f0f13a5c075275dcf5b2e35d16b550" exitCode=0 Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.871125 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" event={"ID":"3a16a524-cbae-4652-8fbd-e0b2430ec7d5","Type":"ContainerDied","Data":"8964675dcc3f2890a07af98a3b878fffe8f0f13a5c075275dcf5b2e35d16b550"} Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.873735 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-h6sv5_cc25d794-4ead-4436-a026-179f655c13d4/console/0.log" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.873890 4979 generic.go:334] "Generic (PLEG): container finished" podID="cc25d794-4ead-4436-a026-179f655c13d4" containerID="37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322" exitCode=2 Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.873955 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h6sv5" event={"ID":"cc25d794-4ead-4436-a026-179f655c13d4","Type":"ContainerDied","Data":"37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322"} Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.874006 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h6sv5" event={"ID":"cc25d794-4ead-4436-a026-179f655c13d4","Type":"ContainerDied","Data":"964c8b1ba5415a6ffab5411d004a571cd2b1dc55669379c6f25606fce00667e5"} Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.874068 4979 scope.go:117] "RemoveContainer" containerID="37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322" Jan 30 21:56:09 crc 
kubenswrapper[4979]: I0130 21:56:09.874298 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h6sv5" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.904817 4979 scope.go:117] "RemoveContainer" containerID="37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322" Jan 30 21:56:09 crc kubenswrapper[4979]: E0130 21:56:09.905430 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322\": container with ID starting with 37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322 not found: ID does not exist" containerID="37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.905482 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322"} err="failed to get container status \"37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322\": rpc error: code = NotFound desc = could not find container \"37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322\": container with ID starting with 37b46b2370ce613b01e9659300787a6fde85290f0dfbf92fac5b79983067d322 not found: ID does not exist" Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.928246 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-h6sv5"] Jan 30 21:56:09 crc kubenswrapper[4979]: I0130 21:56:09.931624 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-h6sv5"] Jan 30 21:56:11 crc kubenswrapper[4979]: I0130 21:56:11.080286 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc25d794-4ead-4436-a026-179f655c13d4" path="/var/lib/kubelet/pods/cc25d794-4ead-4436-a026-179f655c13d4/volumes" Jan 30 21:56:11 crc kubenswrapper[4979]: I0130 21:56:11.895322 4979 generic.go:334] "Generic (PLEG): container finished" podID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerID="ee4556dc4a0b4ab431233fc4bfd44f5fc7311a133dbffc20bdc184a0cc538ac8" exitCode=0 Jan 30 21:56:11 crc kubenswrapper[4979]: I0130 21:56:11.895416 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" event={"ID":"3a16a524-cbae-4652-8fbd-e0b2430ec7d5","Type":"ContainerDied","Data":"ee4556dc4a0b4ab431233fc4bfd44f5fc7311a133dbffc20bdc184a0cc538ac8"} Jan 30 21:56:12 crc kubenswrapper[4979]: I0130 21:56:12.906551 4979 generic.go:334] "Generic (PLEG): container finished" podID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerID="80c9599fd060a9e7859794828d9e70447ffbf0c97210fa24ffebe82f93ce1f27" exitCode=0 Jan 30 21:56:12 crc kubenswrapper[4979]: I0130 21:56:12.906656 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" event={"ID":"3a16a524-cbae-4652-8fbd-e0b2430ec7d5","Type":"ContainerDied","Data":"80c9599fd060a9e7859794828d9e70447ffbf0c97210fa24ffebe82f93ce1f27"} Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.232515 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.362385 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-util\") pod \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.362507 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnf25\" (UniqueName: \"kubernetes.io/projected/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-kube-api-access-jnf25\") pod \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.362814 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-bundle\") pod \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\" (UID: \"3a16a524-cbae-4652-8fbd-e0b2430ec7d5\") " Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.364693 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-bundle" (OuterVolumeSpecName: "bundle") pod "3a16a524-cbae-4652-8fbd-e0b2430ec7d5" (UID: "3a16a524-cbae-4652-8fbd-e0b2430ec7d5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.373939 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-kube-api-access-jnf25" (OuterVolumeSpecName: "kube-api-access-jnf25") pod "3a16a524-cbae-4652-8fbd-e0b2430ec7d5" (UID: "3a16a524-cbae-4652-8fbd-e0b2430ec7d5"). InnerVolumeSpecName "kube-api-access-jnf25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.457203 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-util" (OuterVolumeSpecName: "util") pod "3a16a524-cbae-4652-8fbd-e0b2430ec7d5" (UID: "3a16a524-cbae-4652-8fbd-e0b2430ec7d5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.466277 4979 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.466363 4979 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.466450 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnf25\" (UniqueName: \"kubernetes.io/projected/3a16a524-cbae-4652-8fbd-e0b2430ec7d5-kube-api-access-jnf25\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.927961 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" event={"ID":"3a16a524-cbae-4652-8fbd-e0b2430ec7d5","Type":"ContainerDied","Data":"fa202ddbc836e952b85b3227e8f9d2ef0dfbc5d3f331b0b87d6066b738c774c0"} Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.928029 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa202ddbc836e952b85b3227e8f9d2ef0dfbc5d3f331b0b87d6066b738c774c0" Jan 30 21:56:14 crc kubenswrapper[4979]: I0130 21:56:14.928138 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.461371 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s"] Jan 30 21:56:24 crc kubenswrapper[4979]: E0130 21:56:24.462404 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="pull" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.462419 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="pull" Jan 30 21:56:24 crc kubenswrapper[4979]: E0130 21:56:24.462429 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="util" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.462435 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="util" Jan 30 21:56:24 crc kubenswrapper[4979]: E0130 21:56:24.462447 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc25d794-4ead-4436-a026-179f655c13d4" containerName="console" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.462454 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc25d794-4ead-4436-a026-179f655c13d4" containerName="console" Jan 30 21:56:24 crc kubenswrapper[4979]: E0130 21:56:24.462469 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="extract" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.462475 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="extract" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.462594 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc25d794-4ead-4436-a026-179f655c13d4" containerName="console" Jan 30 
21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.462604 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a16a524-cbae-4652-8fbd-e0b2430ec7d5" containerName="extract" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.463090 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.465474 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.465815 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.466111 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.466449 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-6rprj" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.468315 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.482978 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s"] Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.636693 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-apiservice-cert\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.636787 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbj25\" (UniqueName: \"kubernetes.io/projected/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-kube-api-access-wbj25\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.636868 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-webhook-cert\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.728417 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"] Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.729408 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.734268 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dgwfz" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.734814 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.734942 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.738071 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-webhook-cert\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.738151 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-apiservice-cert\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.738191 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbj25\" (UniqueName: \"kubernetes.io/projected/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-kube-api-access-wbj25\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.747202 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-webhook-cert\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.747600 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-apiservice-cert\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.758354 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"] Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.768001 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbj25\" (UniqueName: \"kubernetes.io/projected/30c6b9df-d3aa-4a9a-807e-93d8b11c9159-kube-api-access-wbj25\") pod \"metallb-operator-controller-manager-68b5d74f6-krw7s\" (UID: \"30c6b9df-d3aa-4a9a-807e-93d8b11c9159\") " pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.781698 4979 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.839537 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04d21772-3311-4f78-a621-a66fa5d1cb7d-webhook-cert\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.839613 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04d21772-3311-4f78-a621-a66fa5d1cb7d-apiservice-cert\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.840166 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvld\" (UniqueName: \"kubernetes.io/projected/04d21772-3311-4f78-a621-a66fa5d1cb7d-kube-api-access-hqvld\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.942335 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04d21772-3311-4f78-a621-a66fa5d1cb7d-webhook-cert\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.942857 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04d21772-3311-4f78-a621-a66fa5d1cb7d-apiservice-cert\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.942905 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvld\" (UniqueName: \"kubernetes.io/projected/04d21772-3311-4f78-a621-a66fa5d1cb7d-kube-api-access-hqvld\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.950732 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04d21772-3311-4f78-a621-a66fa5d1cb7d-apiservice-cert\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.951875 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04d21772-3311-4f78-a621-a66fa5d1cb7d-webhook-cert\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: 
\"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:24 crc kubenswrapper[4979]: I0130 21:56:24.972658 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvld\" (UniqueName: \"kubernetes.io/projected/04d21772-3311-4f78-a621-a66fa5d1cb7d-kube-api-access-hqvld\") pod \"metallb-operator-webhook-server-545587bcb5-lxtf2\" (UID: \"04d21772-3311-4f78-a621-a66fa5d1cb7d\") " pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:25 crc kubenswrapper[4979]: I0130 21:56:25.052212 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:25 crc kubenswrapper[4979]: I0130 21:56:25.255750 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s"] Jan 30 21:56:25 crc kubenswrapper[4979]: I0130 21:56:25.320314 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2"] Jan 30 21:56:25 crc kubenswrapper[4979]: W0130 21:56:25.328154 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04d21772_3311_4f78_a621_a66fa5d1cb7d.slice/crio-ef5318694ac81106ee3406fcd07dc0e7c8bef10957fc11baade16e74817dea4a WatchSource:0}: Error finding container ef5318694ac81106ee3406fcd07dc0e7c8bef10957fc11baade16e74817dea4a: Status 404 returned error can't find the container with id ef5318694ac81106ee3406fcd07dc0e7c8bef10957fc11baade16e74817dea4a Jan 30 21:56:26 crc kubenswrapper[4979]: I0130 21:56:26.008449 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" event={"ID":"04d21772-3311-4f78-a621-a66fa5d1cb7d","Type":"ContainerStarted","Data":"ef5318694ac81106ee3406fcd07dc0e7c8bef10957fc11baade16e74817dea4a"} Jan 30 21:56:26 crc kubenswrapper[4979]: I0130 21:56:26.011495 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" event={"ID":"30c6b9df-d3aa-4a9a-807e-93d8b11c9159","Type":"ContainerStarted","Data":"df21423e823fc936c7379b471ffa423360c5676ae5d0ba9918eaa461fc10bfd7"} Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.706520 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ns9mx"] Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.710294 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.711987 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns9mx"] Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.728664 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-catalog-content\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.728739 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cz6l\" (UniqueName: \"kubernetes.io/projected/822b342e-14fa-4653-8217-bea9a32e90aa-kube-api-access-7cz6l\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.728802 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-utilities\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.830139 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-catalog-content\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.830196 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cz6l\" (UniqueName: \"kubernetes.io/projected/822b342e-14fa-4653-8217-bea9a32e90aa-kube-api-access-7cz6l\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.830230 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-utilities\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.830701 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-utilities\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.830929 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-catalog-content\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:29 crc kubenswrapper[4979]: I0130 21:56:29.863457 4979 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7cz6l\" (UniqueName: \"kubernetes.io/projected/822b342e-14fa-4653-8217-bea9a32e90aa-kube-api-access-7cz6l\") pod \"redhat-marketplace-ns9mx\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:30 crc kubenswrapper[4979]: I0130 21:56:30.038158 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:30 crc kubenswrapper[4979]: I0130 21:56:30.046586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" event={"ID":"30c6b9df-d3aa-4a9a-807e-93d8b11c9159","Type":"ContainerStarted","Data":"ae2f7c4d4eaab24befe0ea9c34c811f6ddb50f955e1cf876a5d47dc1ff694d9d"} Jan 30 21:56:30 crc kubenswrapper[4979]: I0130 21:56:30.047422 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:56:30 crc kubenswrapper[4979]: I0130 21:56:30.092913 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" podStartSLOduration=1.583433348 podStartE2EDuration="6.092896972s" podCreationTimestamp="2026-01-30 21:56:24 +0000 UTC" firstStartedPulling="2026-01-30 21:56:25.27322039 +0000 UTC m=+981.234467423" lastFinishedPulling="2026-01-30 21:56:29.782684014 +0000 UTC m=+985.743931047" observedRunningTime="2026-01-30 21:56:30.087810204 +0000 UTC m=+986.049057237" watchObservedRunningTime="2026-01-30 21:56:30.092896972 +0000 UTC m=+986.054144005" Jan 30 21:56:30 crc kubenswrapper[4979]: I0130 21:56:30.332194 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns9mx"] Jan 30 21:56:31 crc kubenswrapper[4979]: I0130 21:56:31.055462 4979 generic.go:334] "Generic (PLEG): container finished" podID="822b342e-14fa-4653-8217-bea9a32e90aa" containerID="859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637" exitCode=0 Jan 30 21:56:31 crc kubenswrapper[4979]: I0130 21:56:31.055582 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns9mx" event={"ID":"822b342e-14fa-4653-8217-bea9a32e90aa","Type":"ContainerDied","Data":"859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637"} Jan 30 21:56:31 crc kubenswrapper[4979]: I0130 21:56:31.055644 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns9mx" event={"ID":"822b342e-14fa-4653-8217-bea9a32e90aa","Type":"ContainerStarted","Data":"fd8ae73f685afdd6826a3e84a0be1355988be291a6aa0ddd82b95f6ef976bfc3"} Jan 30 21:56:31 crc kubenswrapper[4979]: I0130 21:56:31.057051 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" event={"ID":"04d21772-3311-4f78-a621-a66fa5d1cb7d","Type":"ContainerStarted","Data":"a52b3ae95f0623fdc0546910b27ef1aaae547f433daf48c54b8592b2ff3d29f6"} Jan 30 21:56:31 crc kubenswrapper[4979]: I0130 21:56:31.057202 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:31 crc kubenswrapper[4979]: I0130 21:56:31.110991 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" podStartSLOduration=2.655210245 
podStartE2EDuration="7.110965929s" podCreationTimestamp="2026-01-30 21:56:24 +0000 UTC" firstStartedPulling="2026-01-30 21:56:25.331648819 +0000 UTC m=+981.292895852" lastFinishedPulling="2026-01-30 21:56:29.787404503 +0000 UTC m=+985.748651536" observedRunningTime="2026-01-30 21:56:31.107738922 +0000 UTC m=+987.068985965" watchObservedRunningTime="2026-01-30 21:56:31.110965929 +0000 UTC m=+987.072212972" Jan 30 21:56:32 crc kubenswrapper[4979]: I0130 21:56:32.066295 4979 generic.go:334] "Generic (PLEG): container finished" podID="822b342e-14fa-4653-8217-bea9a32e90aa" containerID="698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377" exitCode=0 Jan 30 21:56:32 crc kubenswrapper[4979]: I0130 21:56:32.066401 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns9mx" event={"ID":"822b342e-14fa-4653-8217-bea9a32e90aa","Type":"ContainerDied","Data":"698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377"} Jan 30 21:56:33 crc kubenswrapper[4979]: I0130 21:56:33.080077 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns9mx" event={"ID":"822b342e-14fa-4653-8217-bea9a32e90aa","Type":"ContainerStarted","Data":"5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268"} Jan 30 21:56:33 crc kubenswrapper[4979]: I0130 21:56:33.102311 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ns9mx" podStartSLOduration=2.567109361 podStartE2EDuration="4.102288129s" podCreationTimestamp="2026-01-30 21:56:29 +0000 UTC" firstStartedPulling="2026-01-30 21:56:31.05813311 +0000 UTC m=+987.019380143" lastFinishedPulling="2026-01-30 21:56:32.593311878 +0000 UTC m=+988.554558911" observedRunningTime="2026-01-30 21:56:33.09678875 +0000 UTC m=+989.058035783" watchObservedRunningTime="2026-01-30 21:56:33.102288129 +0000 UTC m=+989.063535162" Jan 30 21:56:40 crc kubenswrapper[4979]: I0130 21:56:40.039795 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:40 crc kubenswrapper[4979]: I0130 21:56:40.040797 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:40 crc kubenswrapper[4979]: I0130 21:56:40.111376 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:40 crc kubenswrapper[4979]: I0130 21:56:40.231546 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:41 crc kubenswrapper[4979]: I0130 21:56:41.249797 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns9mx"] Jan 30 21:56:42 crc kubenswrapper[4979]: I0130 21:56:42.129433 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ns9mx" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="registry-server" containerID="cri-o://5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268" gracePeriod=2 Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.093116 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145186 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cz6l\" (UniqueName: \"kubernetes.io/projected/822b342e-14fa-4653-8217-bea9a32e90aa-kube-api-access-7cz6l\") pod \"822b342e-14fa-4653-8217-bea9a32e90aa\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145263 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-utilities\") pod \"822b342e-14fa-4653-8217-bea9a32e90aa\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145361 4979 generic.go:334] "Generic (PLEG): container finished" podID="822b342e-14fa-4653-8217-bea9a32e90aa" containerID="5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268" exitCode=0 Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145412 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-catalog-content\") pod \"822b342e-14fa-4653-8217-bea9a32e90aa\" (UID: \"822b342e-14fa-4653-8217-bea9a32e90aa\") " Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145420 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns9mx" event={"ID":"822b342e-14fa-4653-8217-bea9a32e90aa","Type":"ContainerDied","Data":"5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268"} Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145457 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ns9mx" event={"ID":"822b342e-14fa-4653-8217-bea9a32e90aa","Type":"ContainerDied","Data":"fd8ae73f685afdd6826a3e84a0be1355988be291a6aa0ddd82b95f6ef976bfc3"} Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145476 4979 scope.go:117] "RemoveContainer" containerID="5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.145620 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ns9mx" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.148719 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-utilities" (OuterVolumeSpecName: "utilities") pod "822b342e-14fa-4653-8217-bea9a32e90aa" (UID: "822b342e-14fa-4653-8217-bea9a32e90aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.159492 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/822b342e-14fa-4653-8217-bea9a32e90aa-kube-api-access-7cz6l" (OuterVolumeSpecName: "kube-api-access-7cz6l") pod "822b342e-14fa-4653-8217-bea9a32e90aa" (UID: "822b342e-14fa-4653-8217-bea9a32e90aa"). InnerVolumeSpecName "kube-api-access-7cz6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.178806 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "822b342e-14fa-4653-8217-bea9a32e90aa" (UID: "822b342e-14fa-4653-8217-bea9a32e90aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.188523 4979 scope.go:117] "RemoveContainer" containerID="698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.206142 4979 scope.go:117] "RemoveContainer" containerID="859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.224331 4979 scope.go:117] "RemoveContainer" containerID="5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268" Jan 30 21:56:44 crc kubenswrapper[4979]: E0130 21:56:44.225083 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268\": container with ID starting with 5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268 not found: ID does not exist" containerID="5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.225150 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268"} err="failed to get container status \"5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268\": rpc error: code = NotFound desc = could not find container \"5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268\": container with ID starting with 5abeacdaf293da92065e5ea47ef4884b2dba1e34cce926084332f74c28b7d268 not found: ID does not exist" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.225188 4979 scope.go:117] "RemoveContainer" containerID="698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377" Jan 30 21:56:44 crc kubenswrapper[4979]: E0130 21:56:44.225659 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377\": container with ID starting with 698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377 not found: ID does not exist" containerID="698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.225720 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377"} err="failed to get container status \"698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377\": rpc error: code = NotFound desc = could not find container \"698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377\": container with ID starting with 698b915c1ded04c1a01e26f6cb6f69fd6b4fc98307c347316857b25d55e94377 not found: ID does not exist" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.225769 4979 scope.go:117] "RemoveContainer" containerID="859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637" Jan 30 21:56:44 crc kubenswrapper[4979]: 
E0130 21:56:44.226201 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637\": container with ID starting with 859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637 not found: ID does not exist" containerID="859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.226248 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637"} err="failed to get container status \"859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637\": rpc error: code = NotFound desc = could not find container \"859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637\": container with ID starting with 859e037eaa42ba0b536cd2364da67276ab9e0f2ca07ad5c955fc19fe278cf637 not found: ID does not exist" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.247793 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.247825 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cz6l\" (UniqueName: \"kubernetes.io/projected/822b342e-14fa-4653-8217-bea9a32e90aa-kube-api-access-7cz6l\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.247835 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b342e-14fa-4653-8217-bea9a32e90aa-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.487688 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns9mx"] Jan 30 21:56:44 crc kubenswrapper[4979]: I0130 21:56:44.494225 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ns9mx"] Jan 30 21:56:45 crc kubenswrapper[4979]: I0130 21:56:45.058813 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-545587bcb5-lxtf2" Jan 30 21:56:45 crc kubenswrapper[4979]: I0130 21:56:45.078739 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" path="/var/lib/kubelet/pods/822b342e-14fa-4653-8217-bea9a32e90aa/volumes" Jan 30 21:57:04 crc kubenswrapper[4979]: I0130 21:57:04.785675 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-68b5d74f6-krw7s" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.427816 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-cnk7l"] Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.428222 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="extract-content" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.428247 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="extract-content" Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.428261 4979 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="extract-utilities" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.428272 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="extract-utilities" Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.428295 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="registry-server" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.428304 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="registry-server" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.428443 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="822b342e-14fa-4653-8217-bea9a32e90aa" containerName="registry-server" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.430891 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.434340 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv"] Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.435602 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.436858 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-ghkjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.436866 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.436894 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.441179 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.447811 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv"] Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.542764 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-v2nkx"] Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.557418 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.564692 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7qf65" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.564890 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.564974 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.565070 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.584958 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-6whjn"] Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594661 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594710 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxzf5\" (UniqueName: \"kubernetes.io/projected/edde5f2f-1d96-49c5-aee3-92f1b77ac088-kube-api-access-gxzf5\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594743 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-reloader\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594768 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-conf\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594815 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxndl\" (UniqueName: \"kubernetes.io/projected/f8932bcf-8e7b-4302-a623-ece7abe7d2e2-kube-api-access-vxndl\") pod \"frr-k8s-webhook-server-7df86c4f6c-5bgxv\" (UID: \"f8932bcf-8e7b-4302-a623-ece7abe7d2e2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594836 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-startup\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594864 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-sockets\") pod \"frr-k8s-cnk7l\" (UID: 
\"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594892 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics-certs\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.594913 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8932bcf-8e7b-4302-a623-ece7abe7d2e2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5bgxv\" (UID: \"f8932bcf-8e7b-4302-a623-ece7abe7d2e2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.597947 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.602089 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.608620 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6whjn"] Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.696735 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-reloader\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697294 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-conf\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697328 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxndl\" (UniqueName: \"kubernetes.io/projected/f8932bcf-8e7b-4302-a623-ece7abe7d2e2-kube-api-access-vxndl\") pod \"frr-k8s-webhook-server-7df86c4f6c-5bgxv\" (UID: \"f8932bcf-8e7b-4302-a623-ece7abe7d2e2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697355 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-startup\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697385 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-sockets\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697421 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist\") pod \"speaker-v2nkx\" (UID: 
\"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697449 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics-certs\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697467 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-metrics-certs\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697486 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8932bcf-8e7b-4302-a623-ece7abe7d2e2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5bgxv\" (UID: \"f8932bcf-8e7b-4302-a623-ece7abe7d2e2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697509 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6a083acc-78e0-41df-84ad-70c965c7bb5a-metallb-excludel2\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697537 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6vf\" (UniqueName: \"kubernetes.io/projected/6a083acc-78e0-41df-84ad-70c965c7bb5a-kube-api-access-gr6vf\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697566 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.697582 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxzf5\" (UniqueName: \"kubernetes.io/projected/edde5f2f-1d96-49c5-aee3-92f1b77ac088-kube-api-access-gxzf5\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.698330 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-reloader\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.698537 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-conf\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.699529 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"frr-startup\" (UniqueName: \"kubernetes.io/configmap/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-startup\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.699727 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-frr-sockets\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.699803 4979 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.699850 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics-certs podName:edde5f2f-1d96-49c5-aee3-92f1b77ac088 nodeName:}" failed. No retries permitted until 2026-01-30 21:57:06.199836174 +0000 UTC m=+1022.161083207 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics-certs") pod "frr-k8s-cnk7l" (UID: "edde5f2f-1d96-49c5-aee3-92f1b77ac088") : secret "frr-k8s-certs-secret" not found Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.702419 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.708605 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8932bcf-8e7b-4302-a623-ece7abe7d2e2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5bgxv\" (UID: \"f8932bcf-8e7b-4302-a623-ece7abe7d2e2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.714951 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxndl\" (UniqueName: \"kubernetes.io/projected/f8932bcf-8e7b-4302-a623-ece7abe7d2e2-kube-api-access-vxndl\") pod \"frr-k8s-webhook-server-7df86c4f6c-5bgxv\" (UID: \"f8932bcf-8e7b-4302-a623-ece7abe7d2e2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.716800 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxzf5\" (UniqueName: \"kubernetes.io/projected/edde5f2f-1d96-49c5-aee3-92f1b77ac088-kube-api-access-gxzf5\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.761988 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.798979 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9bf7d77-b99e-4190-8510-dd0778767e89-metrics-certs\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.799524 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.799631 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gglzk\" (UniqueName: \"kubernetes.io/projected/b9bf7d77-b99e-4190-8510-dd0778767e89-kube-api-access-gglzk\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.799720 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9bf7d77-b99e-4190-8510-dd0778767e89-cert\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.799778 4979 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.799814 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-metrics-certs\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: E0130 21:57:05.799889 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist podName:6a083acc-78e0-41df-84ad-70c965c7bb5a nodeName:}" failed. No retries permitted until 2026-01-30 21:57:06.299864509 +0000 UTC m=+1022.261111562 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist") pod "speaker-v2nkx" (UID: "6a083acc-78e0-41df-84ad-70c965c7bb5a") : secret "metallb-memberlist" not found Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.800088 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6a083acc-78e0-41df-84ad-70c965c7bb5a-metallb-excludel2\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.800186 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6vf\" (UniqueName: \"kubernetes.io/projected/6a083acc-78e0-41df-84ad-70c965c7bb5a-kube-api-access-gr6vf\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.801271 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6a083acc-78e0-41df-84ad-70c965c7bb5a-metallb-excludel2\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.807499 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-metrics-certs\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.832653 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6vf\" (UniqueName: \"kubernetes.io/projected/6a083acc-78e0-41df-84ad-70c965c7bb5a-kube-api-access-gr6vf\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.901572 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9bf7d77-b99e-4190-8510-dd0778767e89-metrics-certs\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.901653 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gglzk\" (UniqueName: \"kubernetes.io/projected/b9bf7d77-b99e-4190-8510-dd0778767e89-kube-api-access-gglzk\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.901673 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9bf7d77-b99e-4190-8510-dd0778767e89-cert\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.903863 4979 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.908935 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9bf7d77-b99e-4190-8510-dd0778767e89-metrics-certs\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.916592 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9bf7d77-b99e-4190-8510-dd0778767e89-cert\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:05 crc kubenswrapper[4979]: I0130 21:57:05.921693 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gglzk\" (UniqueName: \"kubernetes.io/projected/b9bf7d77-b99e-4190-8510-dd0778767e89-kube-api-access-gglzk\") pod \"controller-6968d8fdc4-6whjn\" (UID: \"b9bf7d77-b99e-4190-8510-dd0778767e89\") " pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.006012 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv"] Jan 30 21:57:06 crc kubenswrapper[4979]: W0130 21:57:06.015540 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8932bcf_8e7b_4302_a623_ece7abe7d2e2.slice/crio-a9c12e95e0ab6793f7cb0f45c8830c4c73e0967866a14dba376b3551eb8e3e26 WatchSource:0}: Error finding container a9c12e95e0ab6793f7cb0f45c8830c4c73e0967866a14dba376b3551eb8e3e26: Status 404 returned error can't find the container with id a9c12e95e0ab6793f7cb0f45c8830c4c73e0967866a14dba376b3551eb8e3e26 Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.205736 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics-certs\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.210991 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edde5f2f-1d96-49c5-aee3-92f1b77ac088-metrics-certs\") pod \"frr-k8s-cnk7l\" (UID: \"edde5f2f-1d96-49c5-aee3-92f1b77ac088\") " pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.212580 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.307256 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:06 crc kubenswrapper[4979]: E0130 21:57:06.307455 4979 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 21:57:06 crc kubenswrapper[4979]: E0130 21:57:06.307538 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist podName:6a083acc-78e0-41df-84ad-70c965c7bb5a nodeName:}" failed. No retries permitted until 2026-01-30 21:57:07.307506004 +0000 UTC m=+1023.268753037 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist") pod "speaker-v2nkx" (UID: "6a083acc-78e0-41df-84ad-70c965c7bb5a") : secret "metallb-memberlist" not found Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.322215 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" event={"ID":"f8932bcf-8e7b-4302-a623-ece7abe7d2e2","Type":"ContainerStarted","Data":"a9c12e95e0ab6793f7cb0f45c8830c4c73e0967866a14dba376b3551eb8e3e26"} Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.355753 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:06 crc kubenswrapper[4979]: I0130 21:57:06.468007 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6whjn"] Jan 30 21:57:06 crc kubenswrapper[4979]: W0130 21:57:06.473864 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9bf7d77_b99e_4190_8510_dd0778767e89.slice/crio-b31309ca117732fd643c1dca8c7140476c36f5aefba6a82104af01c77ddcafdd WatchSource:0}: Error finding container b31309ca117732fd643c1dca8c7140476c36f5aefba6a82104af01c77ddcafdd: Status 404 returned error can't find the container with id b31309ca117732fd643c1dca8c7140476c36f5aefba6a82104af01c77ddcafdd Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.329459 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.338596 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6a083acc-78e0-41df-84ad-70c965c7bb5a-memberlist\") pod \"speaker-v2nkx\" (UID: \"6a083acc-78e0-41df-84ad-70c965c7bb5a\") " pod="metallb-system/speaker-v2nkx" Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.339747 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6whjn" event={"ID":"b9bf7d77-b99e-4190-8510-dd0778767e89","Type":"ContainerStarted","Data":"4454f40e164e3694958bf43824ea8f0b8c1ae2c5a7b14bd6c7da74737dcb3f04"} Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.339810 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6whjn" event={"ID":"b9bf7d77-b99e-4190-8510-dd0778767e89","Type":"ContainerStarted","Data":"155f2bbedb9491c8cbb2c4cbcf3bd397c1b1690aaf92102597172b1c122130f4"} Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.339820 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6whjn" event={"ID":"b9bf7d77-b99e-4190-8510-dd0778767e89","Type":"ContainerStarted","Data":"b31309ca117732fd643c1dca8c7140476c36f5aefba6a82104af01c77ddcafdd"} Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.342635 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.345413 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" 
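
The "SyncLoop (PLEG): event for pod" entries threaded through this section come from the pod lifecycle event generator: it relists containers from CRI-O, diffs the result against its previous snapshot, and turns each change into a ContainerStarted or ContainerDied event for the sync loop; that is how the init-container chain of frr-k8s-cnk7l below surfaces as alternating Died/Started pairs. A toy relist-and-diff in Go (container IDs shortened from the log; event order across a map is not deterministic):

    package main

    import "fmt"

    type event struct{ Type, ID string }

    // relist diffs two container-state snapshots and emits lifecycle
    // events, the way the PLEG turns runtime state into SyncLoop events.
    func relist(prev, cur map[string]string) []event {
        var out []event
        for id, state := range cur {
            if prev[id] == state {
                continue
            }
            switch state {
            case "running":
                out = append(out, event{"ContainerStarted", id})
            case "exited":
                out = append(out, event{"ContainerDied", id})
            }
        }
        return out
    }

    func main() {
        prev := map[string]string{"6f8cb9f245fa": "running"}
        cur := map[string]string{
            "6f8cb9f245fa": "exited",  // one container finished...
            "9eaa7c9e520b": "running", // ...and the next one started
        }
        for _, e := range relist(prev, cur) {
            fmt.Printf("SyncLoop (PLEG): event for pod frr-k8s-cnk7l: %s %s\n", e.Type, e.ID)
        }
    }
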
event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"7c2fe528c2d4e854fc39b9fd99bf4818b764166e231d2934501aa4ac7b53af45"} Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.366573 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-6whjn" podStartSLOduration=2.366548408 podStartE2EDuration="2.366548408s" podCreationTimestamp="2026-01-30 21:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:57:07.362339784 +0000 UTC m=+1023.323586817" watchObservedRunningTime="2026-01-30 21:57:07.366548408 +0000 UTC m=+1023.327795451" Jan 30 21:57:07 crc kubenswrapper[4979]: I0130 21:57:07.400157 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-v2nkx" Jan 30 21:57:08 crc kubenswrapper[4979]: I0130 21:57:08.356193 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v2nkx" event={"ID":"6a083acc-78e0-41df-84ad-70c965c7bb5a","Type":"ContainerStarted","Data":"df85e0c46af00c5ce0640e6f1460561c76e351c8e13d0b77eb3734b41ea564c9"} Jan 30 21:57:08 crc kubenswrapper[4979]: I0130 21:57:08.356576 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v2nkx" event={"ID":"6a083acc-78e0-41df-84ad-70c965c7bb5a","Type":"ContainerStarted","Data":"21d07e8bbb90331a257489180141584d9ead46e30ca327b3d045f58880a95b80"} Jan 30 21:57:08 crc kubenswrapper[4979]: I0130 21:57:08.356592 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v2nkx" event={"ID":"6a083acc-78e0-41df-84ad-70c965c7bb5a","Type":"ContainerStarted","Data":"26a16657fb9c20c67806b7ac0c2a3f99cd56c1621e8fea09789ab2cc81d08760"} Jan 30 21:57:08 crc kubenswrapper[4979]: I0130 21:57:08.356786 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-v2nkx" Jan 30 21:57:08 crc kubenswrapper[4979]: I0130 21:57:08.385009 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-v2nkx" podStartSLOduration=3.384988395 podStartE2EDuration="3.384988395s" podCreationTimestamp="2026-01-30 21:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:57:08.378668513 +0000 UTC m=+1024.339915546" watchObservedRunningTime="2026-01-30 21:57:08.384988395 +0000 UTC m=+1024.346235428" Jan 30 21:57:14 crc kubenswrapper[4979]: I0130 21:57:14.412264 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" event={"ID":"f8932bcf-8e7b-4302-a623-ece7abe7d2e2","Type":"ContainerStarted","Data":"dc4d416b8a795eee7dd714f3c84a0c6fd65b1903b3994e5988437e0e150275d6"} Jan 30 21:57:14 crc kubenswrapper[4979]: I0130 21:57:14.412751 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:14 crc kubenswrapper[4979]: I0130 21:57:14.415320 4979 generic.go:334] "Generic (PLEG): container finished" podID="edde5f2f-1d96-49c5-aee3-92f1b77ac088" containerID="6f8cb9f245fa1decb42f731396b25075acde89cd3bca790a35ae98f6a89131a1" exitCode=0 Jan 30 21:57:14 crc kubenswrapper[4979]: I0130 21:57:14.415430 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" 
event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerDied","Data":"6f8cb9f245fa1decb42f731396b25075acde89cd3bca790a35ae98f6a89131a1"} Jan 30 21:57:14 crc kubenswrapper[4979]: I0130 21:57:14.438103 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" podStartSLOduration=1.8268818470000001 podStartE2EDuration="9.438072025s" podCreationTimestamp="2026-01-30 21:57:05 +0000 UTC" firstStartedPulling="2026-01-30 21:57:06.020476714 +0000 UTC m=+1021.981723747" lastFinishedPulling="2026-01-30 21:57:13.631666892 +0000 UTC m=+1029.592913925" observedRunningTime="2026-01-30 21:57:14.435895466 +0000 UTC m=+1030.397142499" watchObservedRunningTime="2026-01-30 21:57:14.438072025 +0000 UTC m=+1030.399319078" Jan 30 21:57:15 crc kubenswrapper[4979]: I0130 21:57:15.427108 4979 generic.go:334] "Generic (PLEG): container finished" podID="edde5f2f-1d96-49c5-aee3-92f1b77ac088" containerID="0a519d46de9520be742b5bc7ecc3a73261f6fd3c37c0bf0192fb012534ad7751" exitCode=0 Jan 30 21:57:15 crc kubenswrapper[4979]: I0130 21:57:15.427259 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerDied","Data":"0a519d46de9520be742b5bc7ecc3a73261f6fd3c37c0bf0192fb012534ad7751"} Jan 30 21:57:16 crc kubenswrapper[4979]: I0130 21:57:16.219665 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-6whjn" Jan 30 21:57:16 crc kubenswrapper[4979]: I0130 21:57:16.443069 4979 generic.go:334] "Generic (PLEG): container finished" podID="edde5f2f-1d96-49c5-aee3-92f1b77ac088" containerID="687b230b79d478629a3b5a1a54d8209f287a8f2853fa3c15f4a243c9e5146e5f" exitCode=0 Jan 30 21:57:16 crc kubenswrapper[4979]: I0130 21:57:16.443134 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerDied","Data":"687b230b79d478629a3b5a1a54d8209f287a8f2853fa3c15f4a243c9e5146e5f"} Jan 30 21:57:17 crc kubenswrapper[4979]: I0130 21:57:17.405023 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-v2nkx" Jan 30 21:57:17 crc kubenswrapper[4979]: I0130 21:57:17.461853 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"9eaa7c9e520b1f9d7e988eb0d6cc204c1bd1c2ce26e810a56853f6570cb7cbb6"} Jan 30 21:57:17 crc kubenswrapper[4979]: I0130 21:57:17.461915 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"483a74cb65027f36edaaf272987268eb1d37fce59cf8e9e4c064e17bfcb63baf"} Jan 30 21:57:17 crc kubenswrapper[4979]: I0130 21:57:17.461928 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"a9dc121ea455deb3798183ad37879331242e35a8c7b46ceaba90319fc7923bb4"} Jan 30 21:57:17 crc kubenswrapper[4979]: I0130 21:57:17.461938 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"3a69036622a19cf0bb571911030b8012537d593e7e6982e6ea575915373afb53"} Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.476009 
4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"23f7fc2e859a205f6c8ed25abbbc1c10b75076b39d07c58e5042ebfcbdc3bdd9"} Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.476404 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.476418 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cnk7l" event={"ID":"edde5f2f-1d96-49c5-aee3-92f1b77ac088","Type":"ContainerStarted","Data":"8fab229313ba23f445f06eab1c249599c7ea18286b2a404794be886563be4f1f"} Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.507498 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-cnk7l" podStartSLOduration=6.343467275 podStartE2EDuration="13.507472442s" podCreationTimestamp="2026-01-30 21:57:05 +0000 UTC" firstStartedPulling="2026-01-30 21:57:06.522153338 +0000 UTC m=+1022.483400371" lastFinishedPulling="2026-01-30 21:57:13.686158495 +0000 UTC m=+1029.647405538" observedRunningTime="2026-01-30 21:57:18.502756805 +0000 UTC m=+1034.464003848" watchObservedRunningTime="2026-01-30 21:57:18.507472442 +0000 UTC m=+1034.468719485" Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.897187 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc"] Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.898628 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.910019 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:57:18 crc kubenswrapper[4979]: I0130 21:57:18.940149 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc"] Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.017229 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.017281 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.017351 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr6kb\" (UniqueName: \"kubernetes.io/projected/20b0495c-9015-4cd9-9381-096926c32623-kube-api-access-jr6kb\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " 
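
The pod_startup_latency_tracker entries expose their own arithmetic: podStartSLOduration is podStartE2EDuration minus the time spent pulling images, so registry latency is not charged against the startup SLO. For frr-k8s-cnk7l just above, using the monotonic m=+ readings: 1029.647405538 - 1022.483400371 = 7.164005167s of pulling, and 13.507472442 - 7.164005167 = 6.343467275s, exactly the logged SLO duration. (For controller-6968d8fdc4-6whjn, which pulled nothing, the two durations are equal.) A quick Go check:

    package main

    import "fmt"

    func main() {
        // Monotonic clock readings (the m=+ values) from the
        // frr-k8s-cnk7l startup-latency entry above.
        firstStartedPulling := 1022.483400371
        lastFinishedPulling := 1029.647405538
        e2e := 13.507472442 // podStartE2EDuration in seconds

        pulling := lastFinishedPulling - firstStartedPulling
        slo := e2e - pulling // image-pull time is excluded from the SLO

        fmt.Printf("pulling: %.9fs\n", pulling) // 7.164005167s
        fmt.Printf("slo:     %.9fs\n", slo)     // 6.343467275s, as logged
    }
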
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.118782 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr6kb\" (UniqueName: \"kubernetes.io/projected/20b0495c-9015-4cd9-9381-096926c32623-kube-api-access-jr6kb\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.118879 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.118910 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.119427 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.119710 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.141353 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr6kb\" (UniqueName: \"kubernetes.io/projected/20b0495c-9015-4cd9-9381-096926c32623-kube-api-access-jr6kb\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.224236 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.463205 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc"] Jan 30 21:57:19 crc kubenswrapper[4979]: I0130 21:57:19.483494 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" event={"ID":"20b0495c-9015-4cd9-9381-096926c32623","Type":"ContainerStarted","Data":"f3502a227fcac407c208d6a315f5323083b23b991e33b03812a555d3684eedbf"} Jan 30 21:57:20 crc kubenswrapper[4979]: I0130 21:57:20.492308 4979 generic.go:334] "Generic (PLEG): container finished" podID="20b0495c-9015-4cd9-9381-096926c32623" containerID="14bc5dd843028f44fba21e25f302e0081d7ede254e083e89946c5ea930a2ec7c" exitCode=0 Jan 30 21:57:20 crc kubenswrapper[4979]: I0130 21:57:20.492384 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" event={"ID":"20b0495c-9015-4cd9-9381-096926c32623","Type":"ContainerDied","Data":"14bc5dd843028f44fba21e25f302e0081d7ede254e083e89946c5ea930a2ec7c"} Jan 30 21:57:21 crc kubenswrapper[4979]: I0130 21:57:21.357263 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:21 crc kubenswrapper[4979]: I0130 21:57:21.486678 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:24 crc kubenswrapper[4979]: I0130 21:57:24.526217 4979 generic.go:334] "Generic (PLEG): container finished" podID="20b0495c-9015-4cd9-9381-096926c32623" containerID="f7a6336aa36e6067c52252cc9875022a5ce758e3ba0fc5ce20b405a98bd3f083" exitCode=0 Jan 30 21:57:24 crc kubenswrapper[4979]: I0130 21:57:24.526314 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" event={"ID":"20b0495c-9015-4cd9-9381-096926c32623","Type":"ContainerDied","Data":"f7a6336aa36e6067c52252cc9875022a5ce758e3ba0fc5ce20b405a98bd3f083"} Jan 30 21:57:25 crc kubenswrapper[4979]: I0130 21:57:25.536061 4979 generic.go:334] "Generic (PLEG): container finished" podID="20b0495c-9015-4cd9-9381-096926c32623" containerID="d50cfa0598b2c9a6a51f299020c259319cd700ac02cf742802fc0ec6d47d05b2" exitCode=0 Jan 30 21:57:25 crc kubenswrapper[4979]: I0130 21:57:25.536143 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" event={"ID":"20b0495c-9015-4cd9-9381-096926c32623","Type":"ContainerDied","Data":"d50cfa0598b2c9a6a51f299020c259319cd700ac02cf742802fc0ec6d47d05b2"} Jan 30 21:57:25 crc kubenswrapper[4979]: I0130 21:57:25.767696 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5bgxv" Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.359878 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-cnk7l" Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.827079 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.956754 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr6kb\" (UniqueName: \"kubernetes.io/projected/20b0495c-9015-4cd9-9381-096926c32623-kube-api-access-jr6kb\") pod \"20b0495c-9015-4cd9-9381-096926c32623\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.956955 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-bundle\") pod \"20b0495c-9015-4cd9-9381-096926c32623\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.956989 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-util\") pod \"20b0495c-9015-4cd9-9381-096926c32623\" (UID: \"20b0495c-9015-4cd9-9381-096926c32623\") " Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.958491 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-bundle" (OuterVolumeSpecName: "bundle") pod "20b0495c-9015-4cd9-9381-096926c32623" (UID: "20b0495c-9015-4cd9-9381-096926c32623"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.964167 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0495c-9015-4cd9-9381-096926c32623-kube-api-access-jr6kb" (OuterVolumeSpecName: "kube-api-access-jr6kb") pod "20b0495c-9015-4cd9-9381-096926c32623" (UID: "20b0495c-9015-4cd9-9381-096926c32623"). InnerVolumeSpecName "kube-api-access-jr6kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:57:26 crc kubenswrapper[4979]: I0130 21:57:26.966969 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-util" (OuterVolumeSpecName: "util") pod "20b0495c-9015-4cd9-9381-096926c32623" (UID: "20b0495c-9015-4cd9-9381-096926c32623"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:57:27 crc kubenswrapper[4979]: I0130 21:57:27.058752 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr6kb\" (UniqueName: \"kubernetes.io/projected/20b0495c-9015-4cd9-9381-096926c32623-kube-api-access-jr6kb\") on node \"crc\" DevicePath \"\"" Jan 30 21:57:27 crc kubenswrapper[4979]: I0130 21:57:27.058790 4979 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:57:27 crc kubenswrapper[4979]: I0130 21:57:27.058800 4979 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b0495c-9015-4cd9-9381-096926c32623-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:57:27 crc kubenswrapper[4979]: I0130 21:57:27.551827 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" event={"ID":"20b0495c-9015-4cd9-9381-096926c32623","Type":"ContainerDied","Data":"f3502a227fcac407c208d6a315f5323083b23b991e33b03812a555d3684eedbf"} Jan 30 21:57:27 crc kubenswrapper[4979]: I0130 21:57:27.551900 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3502a227fcac407c208d6a315f5323083b23b991e33b03812a555d3684eedbf" Jan 30 21:57:27 crc kubenswrapper[4979]: I0130 21:57:27.551917 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.148843 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd"] Jan 30 21:57:32 crc kubenswrapper[4979]: E0130 21:57:32.150056 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="extract" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.150074 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="extract" Jan 30 21:57:32 crc kubenswrapper[4979]: E0130 21:57:32.150118 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="util" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.150127 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="util" Jan 30 21:57:32 crc kubenswrapper[4979]: E0130 21:57:32.150138 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="pull" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.150147 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="pull" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.150299 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b0495c-9015-4cd9-9381-096926c32623" containerName="extract" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.151006 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.161211 4979 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8vxj7" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.161293 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.168858 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.183748 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd"] Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.238920 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0330af3-c305-40ae-b65b-dbf13ed2c345-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9n9vd\" (UID: \"b0330af3-c305-40ae-b65b-dbf13ed2c345\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.239015 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6zz4\" (UniqueName: \"kubernetes.io/projected/b0330af3-c305-40ae-b65b-dbf13ed2c345-kube-api-access-k6zz4\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9n9vd\" (UID: \"b0330af3-c305-40ae-b65b-dbf13ed2c345\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.340524 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0330af3-c305-40ae-b65b-dbf13ed2c345-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9n9vd\" (UID: \"b0330af3-c305-40ae-b65b-dbf13ed2c345\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.340620 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6zz4\" (UniqueName: \"kubernetes.io/projected/b0330af3-c305-40ae-b65b-dbf13ed2c345-kube-api-access-k6zz4\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9n9vd\" (UID: \"b0330af3-c305-40ae-b65b-dbf13ed2c345\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.341701 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0330af3-c305-40ae-b65b-dbf13ed2c345-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9n9vd\" (UID: \"b0330af3-c305-40ae-b65b-dbf13ed2c345\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.365273 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6zz4\" (UniqueName: \"kubernetes.io/projected/b0330af3-c305-40ae-b65b-dbf13ed2c345-kube-api-access-k6zz4\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9n9vd\" (UID: \"b0330af3-c305-40ae-b65b-dbf13ed2c345\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.474332 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" Jan 30 21:57:32 crc kubenswrapper[4979]: I0130 21:57:32.761468 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd"] Jan 30 21:57:33 crc kubenswrapper[4979]: I0130 21:57:33.597385 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" event={"ID":"b0330af3-c305-40ae-b65b-dbf13ed2c345","Type":"ContainerStarted","Data":"4048f6a6adbfa55d9327c1211a08a7e3e97b3814558fd5629016a32f12d0b1e8"} Jan 30 21:57:36 crc kubenswrapper[4979]: I0130 21:57:36.621189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" event={"ID":"b0330af3-c305-40ae-b65b-dbf13ed2c345","Type":"ContainerStarted","Data":"7b9e7e27ecd2927fc503093b88639d1f6677232f2406ebc93fa4cc6468bf61ca"} Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.025311 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9n9vd" podStartSLOduration=5.65357256 podStartE2EDuration="9.025289744s" podCreationTimestamp="2026-01-30 21:57:32 +0000 UTC" firstStartedPulling="2026-01-30 21:57:32.837452863 +0000 UTC m=+1048.798699896" lastFinishedPulling="2026-01-30 21:57:36.209170047 +0000 UTC m=+1052.170417080" observedRunningTime="2026-01-30 21:57:36.650942941 +0000 UTC m=+1052.612189994" watchObservedRunningTime="2026-01-30 21:57:41.025289744 +0000 UTC m=+1056.986536777" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.031435 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-pw6nw"] Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.032297 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.034702 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.034754 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.034825 4979 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-t5chk" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.045292 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-pw6nw"] Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.102899 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7670008a-1d21-4255-8148-e85ac90a90d4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-pw6nw\" (UID: \"7670008a-1d21-4255-8148-e85ac90a90d4\") " pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.103018 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flzxb\" (UniqueName: \"kubernetes.io/projected/7670008a-1d21-4255-8148-e85ac90a90d4-kube-api-access-flzxb\") pod \"cert-manager-webhook-6888856db4-pw6nw\" (UID: \"7670008a-1d21-4255-8148-e85ac90a90d4\") " pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.204732 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7670008a-1d21-4255-8148-e85ac90a90d4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-pw6nw\" (UID: \"7670008a-1d21-4255-8148-e85ac90a90d4\") " pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.204826 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flzxb\" (UniqueName: \"kubernetes.io/projected/7670008a-1d21-4255-8148-e85ac90a90d4-kube-api-access-flzxb\") pod \"cert-manager-webhook-6888856db4-pw6nw\" (UID: \"7670008a-1d21-4255-8148-e85ac90a90d4\") " pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.229541 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7670008a-1d21-4255-8148-e85ac90a90d4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-pw6nw\" (UID: \"7670008a-1d21-4255-8148-e85ac90a90d4\") " pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.229604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flzxb\" (UniqueName: \"kubernetes.io/projected/7670008a-1d21-4255-8148-e85ac90a90d4-kube-api-access-flzxb\") pod \"cert-manager-webhook-6888856db4-pw6nw\" (UID: \"7670008a-1d21-4255-8148-e85ac90a90d4\") " pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.348349 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.587660 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-pw6nw"] Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.659830 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" event={"ID":"7670008a-1d21-4255-8148-e85ac90a90d4","Type":"ContainerStarted","Data":"09af48dcacf329d9668f08a5ac87afb59194674753e83ccf1e1557c838f5bdbb"} Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.987964 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-x57ft"] Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.989050 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.991175 4979 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-w9v6x" Jan 30 21:57:41 crc kubenswrapper[4979]: I0130 21:57:41.997463 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-x57ft"] Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.137183 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34da3314-5047-419b-8c7b-927cc2f00d8c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-x57ft\" (UID: \"34da3314-5047-419b-8c7b-927cc2f00d8c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.137267 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dnfc\" (UniqueName: \"kubernetes.io/projected/34da3314-5047-419b-8c7b-927cc2f00d8c-kube-api-access-8dnfc\") pod \"cert-manager-cainjector-5545bd876-x57ft\" (UID: \"34da3314-5047-419b-8c7b-927cc2f00d8c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.238298 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34da3314-5047-419b-8c7b-927cc2f00d8c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-x57ft\" (UID: \"34da3314-5047-419b-8c7b-927cc2f00d8c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.238405 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dnfc\" (UniqueName: \"kubernetes.io/projected/34da3314-5047-419b-8c7b-927cc2f00d8c-kube-api-access-8dnfc\") pod \"cert-manager-cainjector-5545bd876-x57ft\" (UID: \"34da3314-5047-419b-8c7b-927cc2f00d8c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.258656 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34da3314-5047-419b-8c7b-927cc2f00d8c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-x57ft\" (UID: \"34da3314-5047-419b-8c7b-927cc2f00d8c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.260752 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8dnfc\" (UniqueName: \"kubernetes.io/projected/34da3314-5047-419b-8c7b-927cc2f00d8c-kube-api-access-8dnfc\") pod \"cert-manager-cainjector-5545bd876-x57ft\" (UID: \"34da3314-5047-419b-8c7b-927cc2f00d8c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.359318 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.630754 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-x57ft"] Jan 30 21:57:42 crc kubenswrapper[4979]: I0130 21:57:42.665706 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" event={"ID":"34da3314-5047-419b-8c7b-927cc2f00d8c","Type":"ContainerStarted","Data":"4716f37d88c507d4f77143a58754d3e30915b797a975dd36d692d65c23fd9278"} Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.358556 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-f88tb"] Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.361659 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.370719 4979 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jlp55" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.374311 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-f88tb"] Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.519485 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99fcd41b-c557-4bf0-abbb-b189f4aaaf41-bound-sa-token\") pod \"cert-manager-545d4d4674-f88tb\" (UID: \"99fcd41b-c557-4bf0-abbb-b189f4aaaf41\") " pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.519563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfl9f\" (UniqueName: \"kubernetes.io/projected/99fcd41b-c557-4bf0-abbb-b189f4aaaf41-kube-api-access-vfl9f\") pod \"cert-manager-545d4d4674-f88tb\" (UID: \"99fcd41b-c557-4bf0-abbb-b189f4aaaf41\") " pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.621396 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99fcd41b-c557-4bf0-abbb-b189f4aaaf41-bound-sa-token\") pod \"cert-manager-545d4d4674-f88tb\" (UID: \"99fcd41b-c557-4bf0-abbb-b189f4aaaf41\") " pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.621584 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfl9f\" (UniqueName: \"kubernetes.io/projected/99fcd41b-c557-4bf0-abbb-b189f4aaaf41-kube-api-access-vfl9f\") pod \"cert-manager-545d4d4674-f88tb\" (UID: \"99fcd41b-c557-4bf0-abbb-b189f4aaaf41\") " pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.645513 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfl9f\" (UniqueName: 
\"kubernetes.io/projected/99fcd41b-c557-4bf0-abbb-b189f4aaaf41-kube-api-access-vfl9f\") pod \"cert-manager-545d4d4674-f88tb\" (UID: \"99fcd41b-c557-4bf0-abbb-b189f4aaaf41\") " pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.653013 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99fcd41b-c557-4bf0-abbb-b189f4aaaf41-bound-sa-token\") pod \"cert-manager-545d4d4674-f88tb\" (UID: \"99fcd41b-c557-4bf0-abbb-b189f4aaaf41\") " pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.688463 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-f88tb" Jan 30 21:57:51 crc kubenswrapper[4979]: I0130 21:57:51.924712 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-f88tb"] Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.766370 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" event={"ID":"34da3314-5047-419b-8c7b-927cc2f00d8c","Type":"ContainerStarted","Data":"668eba0bc9f62d1b85e23f7dc45ff71e36ed3431cfcd0b7c346a67f8d3f54af9"} Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.767905 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" event={"ID":"7670008a-1d21-4255-8148-e85ac90a90d4","Type":"ContainerStarted","Data":"48dda56076e06d540b8a6445ac3ae4a7f3e4500ff91bdb0119d81dae9c34a8bc"} Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.768057 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.769510 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-f88tb" event={"ID":"99fcd41b-c557-4bf0-abbb-b189f4aaaf41","Type":"ContainerStarted","Data":"a687ef5acc78c48e132549afa432dcadfce09aa417c3e91cb10c78cbeb9cb261"} Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.769566 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-f88tb" event={"ID":"99fcd41b-c557-4bf0-abbb-b189f4aaaf41","Type":"ContainerStarted","Data":"aeb94cfdef36afec93b13f04e7596a13af2da580ba44e80e72488fcd4f79c1f7"} Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.792683 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-x57ft" podStartSLOduration=2.56229683 podStartE2EDuration="15.792651748s" podCreationTimestamp="2026-01-30 21:57:41 +0000 UTC" firstStartedPulling="2026-01-30 21:57:42.639612712 +0000 UTC m=+1058.600859745" lastFinishedPulling="2026-01-30 21:57:55.86996763 +0000 UTC m=+1071.831214663" observedRunningTime="2026-01-30 21:57:56.78571636 +0000 UTC m=+1072.746963413" watchObservedRunningTime="2026-01-30 21:57:56.792651748 +0000 UTC m=+1072.753898801" Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.816712 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" podStartSLOduration=1.569486776 podStartE2EDuration="15.816685067s" podCreationTimestamp="2026-01-30 21:57:41 +0000 UTC" firstStartedPulling="2026-01-30 21:57:41.604909476 +0000 UTC m=+1057.566156499" lastFinishedPulling="2026-01-30 21:57:55.852107757 +0000 UTC 
m=+1071.813354790" observedRunningTime="2026-01-30 21:57:56.807766147 +0000 UTC m=+1072.769013210" watchObservedRunningTime="2026-01-30 21:57:56.816685067 +0000 UTC m=+1072.777932100" Jan 30 21:57:56 crc kubenswrapper[4979]: I0130 21:57:56.839495 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-f88tb" podStartSLOduration=5.839468973 podStartE2EDuration="5.839468973s" podCreationTimestamp="2026-01-30 21:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:57:56.834023296 +0000 UTC m=+1072.795270329" watchObservedRunningTime="2026-01-30 21:57:56.839468973 +0000 UTC m=+1072.800716006" Jan 30 21:58:01 crc kubenswrapper[4979]: I0130 21:58:01.351513 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-pw6nw" Jan 30 21:58:02 crc kubenswrapper[4979]: I0130 21:58:02.040002 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:58:02 crc kubenswrapper[4979]: I0130 21:58:02.040157 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.208169 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xlffw"] Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.209331 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.213230 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.213230 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.213230 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-b8s5h" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.239772 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xlffw"] Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.320275 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r64fg\" (UniqueName: \"kubernetes.io/projected/f3c416ee-b90c-4c0f-b679-b10f3468c224-kube-api-access-r64fg\") pod \"openstack-operator-index-xlffw\" (UID: \"f3c416ee-b90c-4c0f-b679-b10f3468c224\") " pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.422185 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r64fg\" (UniqueName: \"kubernetes.io/projected/f3c416ee-b90c-4c0f-b679-b10f3468c224-kube-api-access-r64fg\") pod \"openstack-operator-index-xlffw\" (UID: \"f3c416ee-b90c-4c0f-b679-b10f3468c224\") " pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.444098 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r64fg\" (UniqueName: \"kubernetes.io/projected/f3c416ee-b90c-4c0f-b679-b10f3468c224-kube-api-access-r64fg\") pod \"openstack-operator-index-xlffw\" (UID: \"f3c416ee-b90c-4c0f-b679-b10f3468c224\") " pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.541758 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:04 crc kubenswrapper[4979]: I0130 21:58:04.974369 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xlffw"] Jan 30 21:58:05 crc kubenswrapper[4979]: I0130 21:58:05.836431 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xlffw" event={"ID":"f3c416ee-b90c-4c0f-b679-b10f3468c224","Type":"ContainerStarted","Data":"0c35f707623f9b4df2cd5ad136ddab4f99c10bc8eb3cd2fa22c7087fbcb0d077"} Jan 30 21:58:07 crc kubenswrapper[4979]: I0130 21:58:07.575496 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xlffw"] Jan 30 21:58:07 crc kubenswrapper[4979]: I0130 21:58:07.850370 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xlffw" event={"ID":"f3c416ee-b90c-4c0f-b679-b10f3468c224","Type":"ContainerStarted","Data":"a50f9e6433cf9638a04ada4baca6ab884d117e18c7828dc294a24588ba281dc2"} Jan 30 21:58:07 crc kubenswrapper[4979]: I0130 21:58:07.868309 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xlffw" podStartSLOduration=1.625783265 podStartE2EDuration="3.868289817s" podCreationTimestamp="2026-01-30 21:58:04 +0000 UTC" firstStartedPulling="2026-01-30 21:58:04.986207222 +0000 UTC m=+1080.947454255" lastFinishedPulling="2026-01-30 21:58:07.228713754 +0000 UTC m=+1083.189960807" observedRunningTime="2026-01-30 21:58:07.865483961 +0000 UTC m=+1083.826730994" watchObservedRunningTime="2026-01-30 21:58:07.868289817 +0000 UTC m=+1083.829536840" Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.176563 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jl5wf"] Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.177543 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.193388 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jl5wf"] Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.289372 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bvsh\" (UniqueName: \"kubernetes.io/projected/bb59579b-3a3c-4ae9-b3fe-d4231a17e050-kube-api-access-4bvsh\") pod \"openstack-operator-index-jl5wf\" (UID: \"bb59579b-3a3c-4ae9-b3fe-d4231a17e050\") " pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.391124 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bvsh\" (UniqueName: \"kubernetes.io/projected/bb59579b-3a3c-4ae9-b3fe-d4231a17e050-kube-api-access-4bvsh\") pod \"openstack-operator-index-jl5wf\" (UID: \"bb59579b-3a3c-4ae9-b3fe-d4231a17e050\") " pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.412989 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bvsh\" (UniqueName: \"kubernetes.io/projected/bb59579b-3a3c-4ae9-b3fe-d4231a17e050-kube-api-access-4bvsh\") pod \"openstack-operator-index-jl5wf\" (UID: \"bb59579b-3a3c-4ae9-b3fe-d4231a17e050\") " pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.507001 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.856361 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-xlffw" podUID="f3c416ee-b90c-4c0f-b679-b10f3468c224" containerName="registry-server" containerID="cri-o://a50f9e6433cf9638a04ada4baca6ab884d117e18c7828dc294a24588ba281dc2" gracePeriod=2 Jan 30 21:58:08 crc kubenswrapper[4979]: I0130 21:58:08.999830 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jl5wf"] Jan 30 21:58:09 crc kubenswrapper[4979]: W0130 21:58:09.008804 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb59579b_3a3c_4ae9_b3fe_d4231a17e050.slice/crio-88e2e4f50f11cca75c5399f1ddc2dbcd0543721c0362f98944b81d8738112c9c WatchSource:0}: Error finding container 88e2e4f50f11cca75c5399f1ddc2dbcd0543721c0362f98944b81d8738112c9c: Status 404 returned error can't find the container with id 88e2e4f50f11cca75c5399f1ddc2dbcd0543721c0362f98944b81d8738112c9c Jan 30 21:58:09 crc kubenswrapper[4979]: I0130 21:58:09.865416 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jl5wf" event={"ID":"bb59579b-3a3c-4ae9-b3fe-d4231a17e050","Type":"ContainerStarted","Data":"88e2e4f50f11cca75c5399f1ddc2dbcd0543721c0362f98944b81d8738112c9c"} Jan 30 21:58:10 crc kubenswrapper[4979]: I0130 21:58:10.873583 4979 generic.go:334] "Generic (PLEG): container finished" podID="f3c416ee-b90c-4c0f-b679-b10f3468c224" containerID="a50f9e6433cf9638a04ada4baca6ab884d117e18c7828dc294a24588ba281dc2" exitCode=0 Jan 30 21:58:10 crc kubenswrapper[4979]: I0130 21:58:10.873671 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-xlffw" event={"ID":"f3c416ee-b90c-4c0f-b679-b10f3468c224","Type":"ContainerDied","Data":"a50f9e6433cf9638a04ada4baca6ab884d117e18c7828dc294a24588ba281dc2"} Jan 30 21:58:10 crc kubenswrapper[4979]: I0130 21:58:10.988179 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.141193 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r64fg\" (UniqueName: \"kubernetes.io/projected/f3c416ee-b90c-4c0f-b679-b10f3468c224-kube-api-access-r64fg\") pod \"f3c416ee-b90c-4c0f-b679-b10f3468c224\" (UID: \"f3c416ee-b90c-4c0f-b679-b10f3468c224\") " Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.149025 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c416ee-b90c-4c0f-b679-b10f3468c224-kube-api-access-r64fg" (OuterVolumeSpecName: "kube-api-access-r64fg") pod "f3c416ee-b90c-4c0f-b679-b10f3468c224" (UID: "f3c416ee-b90c-4c0f-b679-b10f3468c224"). InnerVolumeSpecName "kube-api-access-r64fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.242999 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r64fg\" (UniqueName: \"kubernetes.io/projected/f3c416ee-b90c-4c0f-b679-b10f3468c224-kube-api-access-r64fg\") on node \"crc\" DevicePath \"\"" Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.883631 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jl5wf" event={"ID":"bb59579b-3a3c-4ae9-b3fe-d4231a17e050","Type":"ContainerStarted","Data":"f53432ae5b86757feaf6b7f8344f90cf11c0b080240b539ea750b263490f1563"} Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.886301 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xlffw" event={"ID":"f3c416ee-b90c-4c0f-b679-b10f3468c224","Type":"ContainerDied","Data":"0c35f707623f9b4df2cd5ad136ddab4f99c10bc8eb3cd2fa22c7087fbcb0d077"} Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.886377 4979 scope.go:117] "RemoveContainer" containerID="a50f9e6433cf9638a04ada4baca6ab884d117e18c7828dc294a24588ba281dc2" Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.886619 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xlffw" Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.919907 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jl5wf" podStartSLOduration=1.8940516490000001 podStartE2EDuration="3.919863012s" podCreationTimestamp="2026-01-30 21:58:08 +0000 UTC" firstStartedPulling="2026-01-30 21:58:09.013690526 +0000 UTC m=+1084.974937559" lastFinishedPulling="2026-01-30 21:58:11.039501899 +0000 UTC m=+1087.000748922" observedRunningTime="2026-01-30 21:58:11.903281244 +0000 UTC m=+1087.864528277" watchObservedRunningTime="2026-01-30 21:58:11.919863012 +0000 UTC m=+1087.881110045" Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.930412 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xlffw"] Jan 30 21:58:11 crc kubenswrapper[4979]: I0130 21:58:11.934458 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-xlffw"] Jan 30 21:58:13 crc kubenswrapper[4979]: I0130 21:58:13.081210 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3c416ee-b90c-4c0f-b679-b10f3468c224" path="/var/lib/kubelet/pods/f3c416ee-b90c-4c0f-b679-b10f3468c224/volumes" Jan 30 21:58:18 crc kubenswrapper[4979]: I0130 21:58:18.507523 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:18 crc kubenswrapper[4979]: I0130 21:58:18.508280 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:18 crc kubenswrapper[4979]: I0130 21:58:18.553340 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:18 crc kubenswrapper[4979]: I0130 21:58:18.985498 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-jl5wf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.032955 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf"] Jan 30 21:58:20 crc kubenswrapper[4979]: E0130 21:58:20.033513 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c416ee-b90c-4c0f-b679-b10f3468c224" containerName="registry-server" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.033541 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c416ee-b90c-4c0f-b679-b10f3468c224" containerName="registry-server" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.033772 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c416ee-b90c-4c0f-b679-b10f3468c224" containerName="registry-server" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.035597 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.038309 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wbt8z" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.041172 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf"] Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.086851 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gwhz\" (UniqueName: \"kubernetes.io/projected/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-kube-api-access-7gwhz\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.086903 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-util\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.087278 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-bundle\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.188757 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-bundle\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.188864 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwhz\" (UniqueName: \"kubernetes.io/projected/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-kube-api-access-7gwhz\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.188894 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-util\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.189362 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-bundle\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.189506 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-util\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.213885 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gwhz\" (UniqueName: \"kubernetes.io/projected/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-kube-api-access-7gwhz\") pod \"fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") " pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.363926 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.628003 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf"] Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.965239 4979 generic.go:334] "Generic (PLEG): container finished" podID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerID="7570d588bbcf72632b6e3c445a99405e91f58f126b88077afde432d1fcac2dfe" exitCode=0 Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.965302 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" event={"ID":"b788bb72-addf-4df0-9fa8-e27fb8e1e10a","Type":"ContainerDied","Data":"7570d588bbcf72632b6e3c445a99405e91f58f126b88077afde432d1fcac2dfe"} Jan 30 21:58:20 crc kubenswrapper[4979]: I0130 21:58:20.965340 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" event={"ID":"b788bb72-addf-4df0-9fa8-e27fb8e1e10a","Type":"ContainerStarted","Data":"992f2cb14c18dcf574f093a0a0067c3fbdbc7d307e0de5a7ae550e21f4f53948"} Jan 30 21:58:21 crc kubenswrapper[4979]: I0130 21:58:21.973339 4979 generic.go:334] "Generic (PLEG): container finished" podID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerID="3c0861ecad82b499f7f4e3c57750243ee6aea3aed93a08fcb20be4fc5a75d352" exitCode=0 Jan 30 21:58:21 crc kubenswrapper[4979]: I0130 21:58:21.973390 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" event={"ID":"b788bb72-addf-4df0-9fa8-e27fb8e1e10a","Type":"ContainerDied","Data":"3c0861ecad82b499f7f4e3c57750243ee6aea3aed93a08fcb20be4fc5a75d352"} Jan 30 21:58:22 crc kubenswrapper[4979]: I0130 21:58:22.984152 4979 generic.go:334] "Generic (PLEG): container finished" podID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerID="01f3ec01eefaa6ffc24c194ebfba2d370bd7e52c428aca05c86c41cce7d455d7" exitCode=0 Jan 30 21:58:22 crc kubenswrapper[4979]: I0130 21:58:22.984265 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" event={"ID":"b788bb72-addf-4df0-9fa8-e27fb8e1e10a","Type":"ContainerDied","Data":"01f3ec01eefaa6ffc24c194ebfba2d370bd7e52c428aca05c86c41cce7d455d7"}
Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.221433 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf"
Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.354948 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-util\") pod \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") "
Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.355664 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-bundle\") pod \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") "
Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.355711 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gwhz\" (UniqueName: \"kubernetes.io/projected/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-kube-api-access-7gwhz\") pod \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\" (UID: \"b788bb72-addf-4df0-9fa8-e27fb8e1e10a\") "
Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.356955 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-bundle" (OuterVolumeSpecName: "bundle") pod "b788bb72-addf-4df0-9fa8-e27fb8e1e10a" (UID: "b788bb72-addf-4df0-9fa8-e27fb8e1e10a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.367154 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-kube-api-access-7gwhz" (OuterVolumeSpecName: "kube-api-access-7gwhz") pod "b788bb72-addf-4df0-9fa8-e27fb8e1e10a" (UID: "b788bb72-addf-4df0-9fa8-e27fb8e1e10a"). InnerVolumeSpecName "kube-api-access-7gwhz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.371407 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-util" (OuterVolumeSpecName: "util") pod "b788bb72-addf-4df0-9fa8-e27fb8e1e10a" (UID: "b788bb72-addf-4df0-9fa8-e27fb8e1e10a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.458483 4979 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.458547 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gwhz\" (UniqueName: \"kubernetes.io/projected/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-kube-api-access-7gwhz\") on node \"crc\" DevicePath \"\""
Jan 30 21:58:24 crc kubenswrapper[4979]: I0130 21:58:24.458571 4979 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b788bb72-addf-4df0-9fa8-e27fb8e1e10a-util\") on node \"crc\" DevicePath \"\""
Jan 30 21:58:25 crc kubenswrapper[4979]: I0130 21:58:25.007230 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf" event={"ID":"b788bb72-addf-4df0-9fa8-e27fb8e1e10a","Type":"ContainerDied","Data":"992f2cb14c18dcf574f093a0a0067c3fbdbc7d307e0de5a7ae550e21f4f53948"}
Jan 30 21:58:25 crc kubenswrapper[4979]: I0130 21:58:25.007278 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf"
Jan 30 21:58:25 crc kubenswrapper[4979]: I0130 21:58:25.007296 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992f2cb14c18dcf574f093a0a0067c3fbdbc7d307e0de5a7ae550e21f4f53948"
Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.419595 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"]
Jan 30 21:58:27 crc kubenswrapper[4979]: E0130 21:58:27.420440 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="extract"
Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.420458 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="extract"
Jan 30 21:58:27 crc kubenswrapper[4979]: E0130 21:58:27.420479 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="util"
Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.420488 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="util"
Jan 30 21:58:27 crc kubenswrapper[4979]: E0130 21:58:27.420509 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="pull"
Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.420518 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="pull"
Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.420678 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b788bb72-addf-4df0-9fa8-e27fb8e1e10a" containerName="extract"
Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.421304 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"
Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.429680 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-62pb8"
Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.458064 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"]
Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.510129 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmcfl\" (UniqueName: \"kubernetes.io/projected/9a874b50-c515-45d3-8562-05532a2c5adc-kube-api-access-mmcfl\") pod \"openstack-operator-controller-init-7c7d885c49-dmwtw\" (UID: \"9a874b50-c515-45d3-8562-05532a2c5adc\") " pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"
Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.611421 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmcfl\" (UniqueName: \"kubernetes.io/projected/9a874b50-c515-45d3-8562-05532a2c5adc-kube-api-access-mmcfl\") pod \"openstack-operator-controller-init-7c7d885c49-dmwtw\" (UID: \"9a874b50-c515-45d3-8562-05532a2c5adc\") " pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"
Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.635048 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmcfl\" (UniqueName: \"kubernetes.io/projected/9a874b50-c515-45d3-8562-05532a2c5adc-kube-api-access-mmcfl\") pod \"openstack-operator-controller-init-7c7d885c49-dmwtw\" (UID: \"9a874b50-c515-45d3-8562-05532a2c5adc\") " pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"
Jan 30 21:58:27 crc kubenswrapper[4979]: I0130 21:58:27.746124 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"
Jan 30 21:58:28 crc kubenswrapper[4979]: I0130 21:58:28.048264 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"]
Jan 30 21:58:29 crc kubenswrapper[4979]: I0130 21:58:29.041958 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" event={"ID":"9a874b50-c515-45d3-8562-05532a2c5adc","Type":"ContainerStarted","Data":"5a1ce447f0756d7869a65fa28287907b1677da6df0bd8352bedb7b7acf9acb51"}
Jan 30 21:58:32 crc kubenswrapper[4979]: I0130 21:58:32.039638 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:58:32 crc kubenswrapper[4979]: I0130 21:58:32.040043 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:58:33 crc kubenswrapper[4979]: I0130 21:58:33.081749 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" event={"ID":"9a874b50-c515-45d3-8562-05532a2c5adc","Type":"ContainerStarted","Data":"e0e32a9adce0eeecd646e8c3b8b6d62c5327b78877f3a3031f9c800cf98bc14c"}
Jan 30 21:58:33 crc kubenswrapper[4979]: I0130 21:58:33.081958 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"
Jan 30 21:58:33 crc kubenswrapper[4979]: I0130 21:58:33.133856 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw" podStartSLOduration=2.244601882 podStartE2EDuration="6.133832328s" podCreationTimestamp="2026-01-30 21:58:27 +0000 UTC" firstStartedPulling="2026-01-30 21:58:28.065890023 +0000 UTC m=+1104.027137056" lastFinishedPulling="2026-01-30 21:58:31.955120469 +0000 UTC m=+1107.916367502" observedRunningTime="2026-01-30 21:58:33.127426136 +0000 UTC m=+1109.088673169" watchObservedRunningTime="2026-01-30 21:58:33.133832328 +0000 UTC m=+1109.095079371"
Jan 30 21:58:37 crc kubenswrapper[4979]: I0130 21:58:37.750247 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7c7d885c49-dmwtw"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.173619 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.175636 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.181350 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-t26h5"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.189247 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.190323 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.192438 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zntj5"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.200718 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.202108 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.229596 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-n7vkg"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.249458 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbjpg\" (UniqueName: \"kubernetes.io/projected/9134e6d2-b638-49be-9612-be12250e0a6d-kube-api-access-qbjpg\") pod \"designate-operator-controller-manager-8f4c5cb64-5k7wd\" (UID: \"9134e6d2-b638-49be-9612-be12250e0a6d\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.249542 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbhm6\" (UniqueName: \"kubernetes.io/projected/dcd08638-857d-40cd-a92c-b6dcef0bc329-kube-api-access-xbhm6\") pod \"barbican-operator-controller-manager-fc589b45f-r2mb8\" (UID: \"dcd08638-857d-40cd-a92c-b6dcef0bc329\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.249575 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvv96\" (UniqueName: \"kubernetes.io/projected/11771b88-abd2-436e-a95c-5113a5bae88b-kube-api-access-dvv96\") pod \"cinder-operator-controller-manager-787499fbb-p95sz\" (UID: \"11771b88-abd2-436e-a95c-5113a5bae88b\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.251124 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.275060 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.350751 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbjpg\" (UniqueName: \"kubernetes.io/projected/9134e6d2-b638-49be-9612-be12250e0a6d-kube-api-access-qbjpg\") pod \"designate-operator-controller-manager-8f4c5cb64-5k7wd\" (UID: \"9134e6d2-b638-49be-9612-be12250e0a6d\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.350811 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbhm6\" (UniqueName: \"kubernetes.io/projected/dcd08638-857d-40cd-a92c-b6dcef0bc329-kube-api-access-xbhm6\") pod \"barbican-operator-controller-manager-fc589b45f-r2mb8\" (UID: \"dcd08638-857d-40cd-a92c-b6dcef0bc329\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.350833 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvv96\" (UniqueName: \"kubernetes.io/projected/11771b88-abd2-436e-a95c-5113a5bae88b-kube-api-access-dvv96\") pod \"cinder-operator-controller-manager-787499fbb-p95sz\" (UID: \"11771b88-abd2-436e-a95c-5113a5bae88b\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.364596 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.393180 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.394429 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.405113 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.416385 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tpqqj"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.423144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvv96\" (UniqueName: \"kubernetes.io/projected/11771b88-abd2-436e-a95c-5113a5bae88b-kube-api-access-dvv96\") pod \"cinder-operator-controller-manager-787499fbb-p95sz\" (UID: \"11771b88-abd2-436e-a95c-5113a5bae88b\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.425738 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbjpg\" (UniqueName: \"kubernetes.io/projected/9134e6d2-b638-49be-9612-be12250e0a6d-kube-api-access-qbjpg\") pod \"designate-operator-controller-manager-8f4c5cb64-5k7wd\" (UID: \"9134e6d2-b638-49be-9612-be12250e0a6d\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.451396 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrbfx\" (UniqueName: \"kubernetes.io/projected/8893a935-e9c7-4d38-ae0c-17a94445475f-kube-api-access-qrbfx\") pod \"glance-operator-controller-manager-6bfc9d4d48-zqjfh\" (UID: \"8893a935-e9c7-4d38-ae0c-17a94445475f\") " pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.457795 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbhm6\" (UniqueName: \"kubernetes.io/projected/dcd08638-857d-40cd-a92c-b6dcef0bc329-kube-api-access-xbhm6\") pod \"barbican-operator-controller-manager-fc589b45f-r2mb8\" (UID: \"dcd08638-857d-40cd-a92c-b6dcef0bc329\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.461306 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.462308 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.466494 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-p27f6"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.492730 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-9q469"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.493799 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.495482 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.501004 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.501771 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.512489 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.513983 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.514244 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-cwwdt"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.514405 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-x2mf7"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.533203 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.553840 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrbfx\" (UniqueName: \"kubernetes.io/projected/8893a935-e9c7-4d38-ae0c-17a94445475f-kube-api-access-qrbfx\") pod \"glance-operator-controller-manager-6bfc9d4d48-zqjfh\" (UID: \"8893a935-e9c7-4d38-ae0c-17a94445475f\") " pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.583111 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.614640 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrbfx\" (UniqueName: \"kubernetes.io/projected/8893a935-e9c7-4d38-ae0c-17a94445475f-kube-api-access-qrbfx\") pod \"glance-operator-controller-manager-6bfc9d4d48-zqjfh\" (UID: \"8893a935-e9c7-4d38-ae0c-17a94445475f\") " pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.622265 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.657126 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xkvj\" (UniqueName: \"kubernetes.io/projected/5966d922-4db9-40f7-baf1-5624f1a033d6-kube-api-access-2xkvj\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.657207 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcvw\" (UniqueName: \"kubernetes.io/projected/07393de3-4dbb-4de1-a7fc-49785a623de2-kube-api-access-mlcvw\") pod \"horizon-operator-controller-manager-5fb775575f-5pmpx\" (UID: \"07393de3-4dbb-4de1-a7fc-49785a623de2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.657260 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.657290 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dvhs\" (UniqueName: \"kubernetes.io/projected/0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc-kube-api-access-7dvhs\") pod \"heat-operator-controller-manager-65dc6c8d9c-h59f2\" (UID: \"0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.674151 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.675454 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.685491 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-l6x25"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.714354 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-9q469"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.726690 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.762396 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7flt\" (UniqueName: \"kubernetes.io/projected/9c8cf87b-4069-497d-9fcc-3b7be476ed4d-kube-api-access-c7flt\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-lrqnv\" (UID: \"9c8cf87b-4069-497d-9fcc-3b7be476ed4d\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.762455 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xkvj\" (UniqueName: \"kubernetes.io/projected/5966d922-4db9-40f7-baf1-5624f1a033d6-kube-api-access-2xkvj\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.762495 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlcvw\" (UniqueName: \"kubernetes.io/projected/07393de3-4dbb-4de1-a7fc-49785a623de2-kube-api-access-mlcvw\") pod \"horizon-operator-controller-manager-5fb775575f-5pmpx\" (UID: \"07393de3-4dbb-4de1-a7fc-49785a623de2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.762533 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.762553 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dvhs\" (UniqueName: \"kubernetes.io/projected/0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc-kube-api-access-7dvhs\") pod \"heat-operator-controller-manager-65dc6c8d9c-h59f2\" (UID: \"0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"
Jan 30 21:58:55 crc kubenswrapper[4979]: E0130 21:58:55.763289 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 21:58:55 crc kubenswrapper[4979]: E0130 21:58:55.763357 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:56.263330939 +0000 UTC m=+1132.224577972 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.791109 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.792085 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.809245 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.825404 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.832136 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.833423 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.836044 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.837220 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9t9d5"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.837302 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-qk6v7"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.850848 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-7vlqh"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.865178 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h6z9\" (UniqueName: \"kubernetes.io/projected/777d41f5-6e7f-4099-9f6f-aceaf0b972da-kube-api-access-8h6z9\") pod \"mariadb-operator-controller-manager-67bf948998-6bb56\" (UID: \"777d41f5-6e7f-4099-9f6f-aceaf0b972da\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.865269 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2g8p\" (UniqueName: \"kubernetes.io/projected/39f45c61-20b7-4d98-98af-526018a240c1-kube-api-access-c2g8p\") pod \"keystone-operator-controller-manager-64469b487f-g6pnt\" (UID: \"39f45c61-20b7-4d98-98af-526018a240c1\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.865306 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8x6q\" (UniqueName: \"kubernetes.io/projected/7f396cc2-4739-4401-9319-36881d4f449d-kube-api-access-l8x6q\") pod \"manila-operator-controller-manager-7d96d95959-5s8xm\" (UID: \"7f396cc2-4739-4401-9319-36881d4f449d\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.865350 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7flt\" (UniqueName: \"kubernetes.io/projected/9c8cf87b-4069-497d-9fcc-3b7be476ed4d-kube-api-access-c7flt\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-lrqnv\" (UID: \"9c8cf87b-4069-497d-9fcc-3b7be476ed4d\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.870649 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.872650 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dvhs\" (UniqueName: \"kubernetes.io/projected/0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc-kube-api-access-7dvhs\") pod \"heat-operator-controller-manager-65dc6c8d9c-h59f2\" (UID: \"0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.886687 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlcvw\" (UniqueName: \"kubernetes.io/projected/07393de3-4dbb-4de1-a7fc-49785a623de2-kube-api-access-mlcvw\") pod \"horizon-operator-controller-manager-5fb775575f-5pmpx\" (UID: \"07393de3-4dbb-4de1-a7fc-49785a623de2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.886855 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xkvj\" (UniqueName: \"kubernetes.io/projected/5966d922-4db9-40f7-baf1-5624f1a033d6-kube-api-access-2xkvj\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.916479 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.941178 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.941423 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7flt\" (UniqueName: \"kubernetes.io/projected/9c8cf87b-4069-497d-9fcc-3b7be476ed4d-kube-api-access-c7flt\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-lrqnv\" (UID: \"9c8cf87b-4069-497d-9fcc-3b7be476ed4d\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.970364 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"]
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.971112 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h6z9\" (UniqueName: \"kubernetes.io/projected/777d41f5-6e7f-4099-9f6f-aceaf0b972da-kube-api-access-8h6z9\") pod \"mariadb-operator-controller-manager-67bf948998-6bb56\" (UID: \"777d41f5-6e7f-4099-9f6f-aceaf0b972da\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.971167 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2g8p\" (UniqueName: \"kubernetes.io/projected/39f45c61-20b7-4d98-98af-526018a240c1-kube-api-access-c2g8p\") pod \"keystone-operator-controller-manager-64469b487f-g6pnt\" (UID: \"39f45c61-20b7-4d98-98af-526018a240c1\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.971199 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8x6q\" (UniqueName: \"kubernetes.io/projected/7f396cc2-4739-4401-9319-36881d4f449d-kube-api-access-l8x6q\") pod \"manila-operator-controller-manager-7d96d95959-5s8xm\" (UID: \"7f396cc2-4739-4401-9319-36881d4f449d\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"
Jan 30 21:58:55 crc kubenswrapper[4979]: I0130 21:58:55.987136 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.023345 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-v774d"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.024357 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.025193 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.036019 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-ltj68"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.047120 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-v774d"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.074334 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq8kz\" (UniqueName: \"kubernetes.io/projected/31481495-f181-449a-887e-ed58bf88c783-kube-api-access-sq8kz\") pod \"neutron-operator-controller-manager-576995988b-v774d\" (UID: \"31481495-f181-449a-887e-ed58bf88c783\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.077171 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.088200 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.102906 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.110592 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.118986 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.121612 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-dnp56"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.133776 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-vvt9t"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.137502 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h6z9\" (UniqueName: \"kubernetes.io/projected/777d41f5-6e7f-4099-9f6f-aceaf0b972da-kube-api-access-8h6z9\") pod \"mariadb-operator-controller-manager-67bf948998-6bb56\" (UID: \"777d41f5-6e7f-4099-9f6f-aceaf0b972da\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.137564 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.138835 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8x6q\" (UniqueName: \"kubernetes.io/projected/7f396cc2-4739-4401-9319-36881d4f449d-kube-api-access-l8x6q\") pod \"manila-operator-controller-manager-7d96d95959-5s8xm\" (UID: \"7f396cc2-4739-4401-9319-36881d4f449d\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.166772 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2g8p\" (UniqueName: \"kubernetes.io/projected/39f45c61-20b7-4d98-98af-526018a240c1-kube-api-access-c2g8p\") pod \"keystone-operator-controller-manager-64469b487f-g6pnt\" (UID: \"39f45c61-20b7-4d98-98af-526018a240c1\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.185474 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq8kz\" (UniqueName: \"kubernetes.io/projected/31481495-f181-449a-887e-ed58bf88c783-kube-api-access-sq8kz\") pod \"neutron-operator-controller-manager-576995988b-v774d\" (UID: \"31481495-f181-449a-887e-ed58bf88c783\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.259299 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq8kz\" (UniqueName: \"kubernetes.io/projected/31481495-f181-449a-887e-ed58bf88c783-kube-api-access-sq8kz\") pod \"neutron-operator-controller-manager-576995988b-v774d\" (UID: \"31481495-f181-449a-887e-ed58bf88c783\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.271582 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.282511 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.289696 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.289794 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs7tx\" (UniqueName: \"kubernetes.io/projected/1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7-kube-api-access-vs7tx\") pod \"nova-operator-controller-manager-5644b66645-lz8dw\" (UID: \"1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.289821 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq5km\" (UniqueName: \"kubernetes.io/projected/73527aaf-5de3-4a3e-aa4c-f2ac98e5be11-kube-api-access-vq5km\") pod \"octavia-operator-controller-manager-694c6dcf95-58s6k\" (UID: \"73527aaf-5de3-4a3e-aa4c-f2ac98e5be11\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"
Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.292342 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.292410 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:57.292389023 +0000 UTC m=+1133.253636056 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.292532 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.297620 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.330125 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-bflpz"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.360317 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.400567 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.416000 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.443199 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs7tx\" (UniqueName: \"kubernetes.io/projected/1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7-kube-api-access-vs7tx\") pod \"nova-operator-controller-manager-5644b66645-lz8dw\" (UID: \"1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.443285 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq5km\" (UniqueName: \"kubernetes.io/projected/73527aaf-5de3-4a3e-aa4c-f2ac98e5be11-kube-api-access-vq5km\") pod \"octavia-operator-controller-manager-694c6dcf95-58s6k\" (UID: \"73527aaf-5de3-4a3e-aa4c-f2ac98e5be11\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.443550 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8pqf\" (UniqueName: \"kubernetes.io/projected/82a19f5f-9a94-4b08-8795-22fce21897bf-kube-api-access-l8pqf\") pod \"ovn-operator-controller-manager-788c46999f-6f7vv\" (UID: \"82a19f5f-9a94-4b08-8795-22fce21897bf\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.445254 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.461359 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.461807 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2bfdd"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.499861 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq5km\" (UniqueName: \"kubernetes.io/projected/73527aaf-5de3-4a3e-aa4c-f2ac98e5be11-kube-api-access-vq5km\") pod \"octavia-operator-controller-manager-694c6dcf95-58s6k\" (UID: \"73527aaf-5de3-4a3e-aa4c-f2ac98e5be11\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.511914 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs7tx\" (UniqueName: \"kubernetes.io/projected/1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7-kube-api-access-vs7tx\") pod \"nova-operator-controller-manager-5644b66645-lz8dw\" (UID: \"1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.516093 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.528458 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.532241 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9nkdp"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.542939 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.544460 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.555413 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zxtwh"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.564320 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.564479 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8pqf\" (UniqueName: \"kubernetes.io/projected/82a19f5f-9a94-4b08-8795-22fce21897bf-kube-api-access-l8pqf\") pod \"ovn-operator-controller-manager-788c46999f-6f7vv\" (UID: \"82a19f5f-9a94-4b08-8795-22fce21897bf\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.564647 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqbc\" (UniqueName: \"kubernetes.io/projected/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-kube-api-access-tkqbc\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.577148 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.585465 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.586596 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.592819 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dnmxb"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.601470 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.625730 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.639889 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8pqf\" (UniqueName: \"kubernetes.io/projected/82a19f5f-9a94-4b08-8795-22fce21897bf-kube-api-access-l8pqf\") pod \"ovn-operator-controller-manager-788c46999f-6f7vv\" (UID: \"82a19f5f-9a94-4b08-8795-22fce21897bf\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.652263 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.667446 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkqbc\" (UniqueName: \"kubernetes.io/projected/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-kube-api-access-tkqbc\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.667966 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n9js\" (UniqueName: \"kubernetes.io/projected/bf959f71-8af9-4121-888f-13207cc2e1d0-kube-api-access-9n9js\") pod \"telemetry-operator-controller-manager-69484b8d9d-nc5fg\" (UID: \"bf959f71-8af9-4121-888f-13207cc2e1d0\") " pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.668043 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frr8f\" (UniqueName: \"kubernetes.io/projected/c15b97e5-3fe4-4f42-9501-b4c7c083bdbb-kube-api-access-frr8f\") pod \"swift-operator-controller-manager-566d8d7445-78f4b\" (UID: \"c15b97e5-3fe4-4f42-9501-b4c7c083bdbb\") " pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.668089 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.668111 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sztq4\" (UniqueName: \"kubernetes.io/projected/cf2e278a-e0cb-4505-bd08-38c02155a632-kube-api-access-sztq4\") pod \"placement-operator-controller-manager-5b964cf4cd-7f98k\" (UID: \"cf2e278a-e0cb-4505-bd08-38c02155a632\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"
Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.669665 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.669749 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:58:57.169726056 +0000 UTC m=+1133.130973099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" (UID: "c9710f6a-7b47-4f62-bc11-9d5727fdb01f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.685409 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.704274 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.708229 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkqbc\" (UniqueName: \"kubernetes.io/projected/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-kube-api-access-tkqbc\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.711355 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.728559 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.749210 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.750517 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.753744 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8m6mf"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.773096 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n9js\" (UniqueName: \"kubernetes.io/projected/bf959f71-8af9-4121-888f-13207cc2e1d0-kube-api-access-9n9js\") pod \"telemetry-operator-controller-manager-69484b8d9d-nc5fg\" (UID: \"bf959f71-8af9-4121-888f-13207cc2e1d0\") " pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.773207 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frr8f\" (UniqueName: \"kubernetes.io/projected/c15b97e5-3fe4-4f42-9501-b4c7c083bdbb-kube-api-access-frr8f\") pod \"swift-operator-controller-manager-566d8d7445-78f4b\" (UID: \"c15b97e5-3fe4-4f42-9501-b4c7c083bdbb\") " pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.773272 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48t77\" (UniqueName: \"kubernetes.io/projected/baa9dff2-93f9-4590-a86d-cd891b4273f2-kube-api-access-48t77\") pod \"test-operator-controller-manager-56f8bfcd9f-57br8\" (UID: \"baa9dff2-93f9-4590-a86d-cd891b4273f2\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.773312 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sztq4\" (UniqueName: \"kubernetes.io/projected/cf2e278a-e0cb-4505-bd08-38c02155a632-kube-api-access-sztq4\") pod \"placement-operator-controller-manager-5b964cf4cd-7f98k\" (UID: \"cf2e278a-e0cb-4505-bd08-38c02155a632\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.787147 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.815082 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n9js\" (UniqueName: \"kubernetes.io/projected/bf959f71-8af9-4121-888f-13207cc2e1d0-kube-api-access-9n9js\") pod \"telemetry-operator-controller-manager-69484b8d9d-nc5fg\" (UID: \"bf959f71-8af9-4121-888f-13207cc2e1d0\") " pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.821120 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.822365 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.829193 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frr8f\" (UniqueName: \"kubernetes.io/projected/c15b97e5-3fe4-4f42-9501-b4c7c083bdbb-kube-api-access-frr8f\") pod \"swift-operator-controller-manager-566d8d7445-78f4b\" (UID: \"c15b97e5-3fe4-4f42-9501-b4c7c083bdbb\") " pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.832175 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mf7d7"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.833715 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sztq4\" (UniqueName: \"kubernetes.io/projected/cf2e278a-e0cb-4505-bd08-38c02155a632-kube-api-access-sztq4\") pod \"placement-operator-controller-manager-5b964cf4cd-7f98k\" (UID: \"cf2e278a-e0cb-4505-bd08-38c02155a632\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.851114 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.868525 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.870108 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.873535 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.873905 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2txjl\" (UniqueName: \"kubernetes.io/projected/2487dbd3-ca49-4b26-99e3-2c858b549944-kube-api-access-2txjl\") pod \"watcher-operator-controller-manager-586b95b788-dpkrg\" (UID: \"2487dbd3-ca49-4b26-99e3-2c858b549944\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.873946 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48t77\" (UniqueName: \"kubernetes.io/projected/baa9dff2-93f9-4590-a86d-cd891b4273f2-kube-api-access-48t77\") pod \"test-operator-controller-manager-56f8bfcd9f-57br8\" (UID: \"baa9dff2-93f9-4590-a86d-cd891b4273f2\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.873979 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.874050 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xwc8\" (UniqueName: \"kubernetes.io/projected/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-kube-api-access-5xwc8\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.874129 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.888051 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.888286 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.888434 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-k8hfr"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.916798 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48t77\" (UniqueName: \"kubernetes.io/projected/baa9dff2-93f9-4590-a86d-cd891b4273f2-kube-api-access-48t77\") pod \"test-operator-controller-manager-56f8bfcd9f-57br8\" (UID: \"baa9dff2-93f9-4590-a86d-cd891b4273f2\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.922498 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.923052 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.923766 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.936731 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-8w5lt"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.954187 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.971668 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8"]
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.986299 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2txjl\" (UniqueName: \"kubernetes.io/projected/2487dbd3-ca49-4b26-99e3-2c858b549944-kube-api-access-2txjl\") pod \"watcher-operator-controller-manager-586b95b788-dpkrg\" (UID: \"2487dbd3-ca49-4b26-99e3-2c858b549944\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.986411 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.986474 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xwc8\" (UniqueName: \"kubernetes.io/projected/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-kube-api-access-5xwc8\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:58:56 crc kubenswrapper[4979]: I0130 21:58:56.986519 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.986725 4979 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.986801 4979 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:57.486775348 +0000 UTC m=+1133.448022381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "metrics-server-cert" not found Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.990187 4979 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:58:56 crc kubenswrapper[4979]: E0130 21:58:56.998277 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:57.496298516 +0000 UTC m=+1133.457545549 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.030987 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2txjl\" (UniqueName: \"kubernetes.io/projected/2487dbd3-ca49-4b26-99e3-2c858b549944-kube-api-access-2txjl\") pod \"watcher-operator-controller-manager-586b95b788-dpkrg\" (UID: \"2487dbd3-ca49-4b26-99e3-2c858b549944\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.031731 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.031871 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xwc8\" (UniqueName: \"kubernetes.io/projected/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-kube-api-access-5xwc8\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.046472 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.088563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czf24\" (UniqueName: \"kubernetes.io/projected/788f4d92-590f-44b1-8b93-a15b9f88b052-kube-api-access-czf24\") pod \"rabbitmq-cluster-operator-manager-668c99d594-r4rcx\" (UID: \"788f4d92-590f-44b1-8b93-a15b9f88b052\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.090514 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.125550 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.177360 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.191253 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.191378 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czf24\" (UniqueName: \"kubernetes.io/projected/788f4d92-590f-44b1-8b93-a15b9f88b052-kube-api-access-czf24\") pod \"rabbitmq-cluster-operator-manager-668c99d594-r4rcx\" (UID: \"788f4d92-590f-44b1-8b93-a15b9f88b052\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.191915 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.191967 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:58:58.191948536 +0000 UTC m=+1134.153195569 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" (UID: "c9710f6a-7b47-4f62-bc11-9d5727fdb01f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.233436 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czf24\" (UniqueName: \"kubernetes.io/projected/788f4d92-590f-44b1-8b93-a15b9f88b052-kube-api-access-czf24\") pod \"rabbitmq-cluster-operator-manager-668c99d594-r4rcx\" (UID: \"788f4d92-590f-44b1-8b93-a15b9f88b052\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.246937 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.272742 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" event={"ID":"dcd08638-857d-40cd-a92c-b6dcef0bc329","Type":"ContainerStarted","Data":"66ec391d58e9960ec214debe0d680ccf5cb75d3e9c5a3e678db1222022950789"} Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.293741 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.294308 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.294361 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:59.294343344 +0000 UTC m=+1135.255590377 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.298996 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" event={"ID":"11771b88-abd2-436e-a95c-5113a5bae88b","Type":"ContainerStarted","Data":"cc46f58c8018b2ead10d44569d9ab7afbb58be368ff92e05e6b217db3c7973f5"} Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.339455 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd"] Jan 30 21:58:57 crc kubenswrapper[4979]: W0130 21:58:57.359947 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9134e6d2_b638_49be_9612_be12250e0a6d.slice/crio-6cadfdbebc377bbac4fde9574f506238659aab1392b80b29b437a5bed88e2f8e WatchSource:0}: Error finding container 6cadfdbebc377bbac4fde9574f506238659aab1392b80b29b437a5bed88e2f8e: Status 404 returned error can't find the container with id 6cadfdbebc377bbac4fde9574f506238659aab1392b80b29b437a5bed88e2f8e Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.496348 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.496631 4979 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.496780 4979 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:58.496753007 +0000 UTC m=+1134.458000040 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "metrics-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.601168 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.601571 4979 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: E0130 21:58:57.601683 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:58:58.601653883 +0000 UTC m=+1134.562900916 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "webhook-server-cert" not found Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.720145 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.728477 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.741423 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.754155 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.777302 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw"] Jan 30 21:58:57 crc kubenswrapper[4979]: W0130 21:58:57.806271 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe4c32c_a00c_41e9_a15d_d1ff4cedf9f7.slice/crio-898b20b008ed7e95b028a796741dc405a00dfd116a65e070234b6e586204c01d WatchSource:0}: Error finding container 898b20b008ed7e95b028a796741dc405a00dfd116a65e070234b6e586204c01d: Status 404 returned error can't find the container with id 898b20b008ed7e95b028a796741dc405a00dfd116a65e070234b6e586204c01d Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.842427 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.876733 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt"] Jan 30 21:58:57 crc kubenswrapper[4979]: I0130 21:58:57.909230 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx"] Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.134409 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k"] Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.152219 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg"] Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.158790 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k"] Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.175327 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b"] Jan 30 21:58:58 crc kubenswrapper[4979]: W0130 21:58:58.178829 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2487dbd3_ca49_4b26_99e3_2c858b549944.slice/crio-2e02eda3a9a37c7c61bf31b8228285e6b92852e09866a4d0a087ff8b630d3b5b WatchSource:0}: Error finding container 2e02eda3a9a37c7c61bf31b8228285e6b92852e09866a4d0a087ff8b630d3b5b: Status 404 returned error can't find the container with id 2e02eda3a9a37c7c61bf31b8228285e6b92852e09866a4d0a087ff8b630d3b5b Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.231627 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.231831 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.231888 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:59:00.231865562 +0000 UTC m=+1136.193112595 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" (UID: "c9710f6a-7b47-4f62-bc11-9d5727fdb01f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.235538 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv"] Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.245271 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sq8kz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-576995988b-v774d_openstack-operators(31481495-f181-449a-887e-ed58bf88c783): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:58:58 crc kubenswrapper[4979]: W0130 21:58:58.259195 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc15b97e5_3fe4_4f42_9501_b4c7c083bdbb.slice/crio-e3154ff4a4943e5dfc2aa5ad48213e932a31763bde20e0b7e6ee93d620d294b5 WatchSource:0}: Error finding container e3154ff4a4943e5dfc2aa5ad48213e932a31763bde20e0b7e6ee93d620d294b5: Status 404 returned error can't find the container with id 
e3154ff4a4943e5dfc2aa5ad48213e932a31763bde20e0b7e6ee93d620d294b5 Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.259631 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" podUID="31481495-f181-449a-887e-ed58bf88c783" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.268869 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-48t77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-57br8_openstack-operators(baa9dff2-93f9-4590-a86d-cd891b4273f2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.269992 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" podUID="baa9dff2-93f9-4590-a86d-cd891b4273f2" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.276093 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:5bca7e1776db32cb5889c1cfca39662741f9c0f531e6d2e52d9d41afb32ae543,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9n9js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-69484b8d9d-nc5fg_openstack-operators(bf959f71-8af9-4121-888f-13207cc2e1d0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.277409 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" podUID="bf959f71-8af9-4121-888f-13207cc2e1d0" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.279340 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:e5570727bc92a0d4d95be8232fa9ccad32e212f77538a1bf5319b6e951be2011,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frr8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-566d8d7445-78f4b_openstack-operators(c15b97e5-3fe4-4f42-9501-b4c7c083bdbb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.279451 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czf24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-r4rcx_openstack-operators(788f4d92-590f-44b1-8b93-a15b9f88b052): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.281324 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" podUID="788f4d92-590f-44b1-8b93-a15b9f88b052"
Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.281383 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" podUID="c15b97e5-3fe4-4f42-9501-b4c7c083bdbb"
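[Annotation] Every ErrImagePull in this burst carries the same message, "pull QPS exceeded". That is not a registry-side error: kubelet rate-limits its own image pulls (the registryPullQPS and registryBurst kubelet configuration settings, which default to 5 pulls per second with a burst of 10), and a few dozen operator images requested in the same second exhaust the bucket. A token-bucket sketch of that behaviour under the assumed default limits; it mirrors the mechanism, not kubelet's actual source:

```go
// Token-bucket sketch of kubelet's client-side pull limit
// (assumed QPS=5, burst=10, matching the registryPullQPS/registryBurst defaults).
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(5, 10) // 5 pulls per second, burst of 10

	// Thirty near-simultaneous pulls, like the operator pods above: the first
	// ten ride the burst, the rest are rejected until tokens refill.
	for i := 1; i <= 30; i++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: started\n", i)
		} else {
			fmt.Printf("pull %2d: pull QPS exceeded\n", i) // surfaces as ErrImagePull
		}
	}
}
```

The affected pods drop into ImagePullBackOff rather than failing permanently; the repeated back-off entries below are the normal retry path, not a terminal error.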
event={"ID":"9134e6d2-b638-49be-9612-be12250e0a6d","Type":"ContainerStarted","Data":"6cadfdbebc377bbac4fde9574f506238659aab1392b80b29b437a5bed88e2f8e"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.360793 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" event={"ID":"777d41f5-6e7f-4099-9f6f-aceaf0b972da","Type":"ContainerStarted","Data":"8bd7ed684f108dcf5a99e700affd35c3c80dade7051363d2b9848288b2063b1c"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.376572 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" event={"ID":"07393de3-4dbb-4de1-a7fc-49785a623de2","Type":"ContainerStarted","Data":"e6e0db1dcd1f9230375556cfa1847411cf2025f53695bc7e8ee5b66539a20a92"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.382920 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" event={"ID":"73527aaf-5de3-4a3e-aa4c-f2ac98e5be11","Type":"ContainerStarted","Data":"0389c19893a55c30c8198d9fcc9a37e00975ba89b9f1e565e2d6969926bdc40f"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.384467 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" event={"ID":"31481495-f181-449a-887e-ed58bf88c783","Type":"ContainerStarted","Data":"f3920d87c0013621cd7f4beb506abbc73eefda33aad2b3719f1075e00ee8cbca"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.385646 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" event={"ID":"39f45c61-20b7-4d98-98af-526018a240c1","Type":"ContainerStarted","Data":"36be41324561d5f7c7cc3a1f1f888e7f40e694d4cee80ac2e087a5e23901cc01"} Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.386386 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" podUID="31481495-f181-449a-887e-ed58bf88c783" Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.395417 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" event={"ID":"0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc","Type":"ContainerStarted","Data":"e9ee1225fae6ed01d9e7ce06f65effe310b78668cfb2f0f603c6514f7d2483db"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.398672 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" event={"ID":"cf2e278a-e0cb-4505-bd08-38c02155a632","Type":"ContainerStarted","Data":"9243f3624479c7f135541459a276bbd2f0984ed35dbb5738fcfa1d1ad390c85d"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.411433 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" event={"ID":"c15b97e5-3fe4-4f42-9501-b4c7c083bdbb","Type":"ContainerStarted","Data":"e3154ff4a4943e5dfc2aa5ad48213e932a31763bde20e0b7e6ee93d620d294b5"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.413676 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" event={"ID":"8893a935-e9c7-4d38-ae0c-17a94445475f","Type":"ContainerStarted","Data":"c75785069466c39270087e5589b4eab54cf4d089bf564b4fae7e9c9fcd62a0b2"} Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.413717 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:e5570727bc92a0d4d95be8232fa9ccad32e212f77538a1bf5319b6e951be2011\\\"\"" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" podUID="c15b97e5-3fe4-4f42-9501-b4c7c083bdbb" Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.417170 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" event={"ID":"788f4d92-590f-44b1-8b93-a15b9f88b052","Type":"ContainerStarted","Data":"0a09e2a4ce740e64ad18e4734d7481e7ce3d91c3faf8386f5888144565151049"} Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.419386 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" podUID="788f4d92-590f-44b1-8b93-a15b9f88b052" Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.420993 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" event={"ID":"bf959f71-8af9-4121-888f-13207cc2e1d0","Type":"ContainerStarted","Data":"067da97437fc8c4db88207b02e46078368819cde1916f92230da12f482ed30c0"} Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.422561 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:5bca7e1776db32cb5889c1cfca39662741f9c0f531e6d2e52d9d41afb32ae543\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" podUID="bf959f71-8af9-4121-888f-13207cc2e1d0" Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.423501 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" event={"ID":"9c8cf87b-4069-497d-9fcc-3b7be476ed4d","Type":"ContainerStarted","Data":"7205429c3d378bbd8b8fd00e2e77b72ee93087fe8a8cc1e66e97c5c681686793"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.425076 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" event={"ID":"82a19f5f-9a94-4b08-8795-22fce21897bf","Type":"ContainerStarted","Data":"84821abbda03f7cf84c7cc1354b2da6b877de962ea39b4e333f2061ce74f303a"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.427422 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" event={"ID":"1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7","Type":"ContainerStarted","Data":"898b20b008ed7e95b028a796741dc405a00dfd116a65e070234b6e586204c01d"} Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.445699 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" event={"ID":"baa9dff2-93f9-4590-a86d-cd891b4273f2","Type":"ContainerStarted","Data":"80ab1bd9d1ba102d454a9adac545f2464ffcc2dfdbea82c7a3182d562cceb443"} Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.447444 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" podUID="baa9dff2-93f9-4590-a86d-cd891b4273f2" Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.545308 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.545589 4979 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.545662 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:00.545640267 +0000 UTC m=+1136.506887300 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "metrics-server-cert" not found Jan 30 21:58:58 crc kubenswrapper[4979]: I0130 21:58:58.646652 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.647369 4979 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:58:58 crc kubenswrapper[4979]: E0130 21:58:58.647431 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:00.647410808 +0000 UTC m=+1136.608657841 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "webhook-server-cert" not found Jan 30 21:58:59 crc kubenswrapper[4979]: I0130 21:58:59.389719 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.389997 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.390160 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:03.39012302 +0000 UTC m=+1139.351370213 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.479075 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" podUID="baa9dff2-93f9-4590-a86d-cd891b4273f2" Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.479279 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" podUID="31481495-f181-449a-887e-ed58bf88c783" Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.479341 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" podUID="788f4d92-590f-44b1-8b93-a15b9f88b052" Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.480090 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:5bca7e1776db32cb5889c1cfca39662741f9c0f531e6d2e52d9d41afb32ae543\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" podUID="bf959f71-8af9-4121-888f-13207cc2e1d0" Jan 30 21:58:59 crc kubenswrapper[4979]: E0130 21:58:59.480165 4979 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:e5570727bc92a0d4d95be8232fa9ccad32e212f77538a1bf5319b6e951be2011\\\"\"" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" podUID="c15b97e5-3fe4-4f42-9501-b4c7c083bdbb" Jan 30 21:59:00 crc kubenswrapper[4979]: I0130 21:59:00.315376 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:59:00 crc kubenswrapper[4979]: E0130 21:59:00.316058 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:59:00 crc kubenswrapper[4979]: E0130 21:59:00.316178 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:59:04.316158117 +0000 UTC m=+1140.277405140 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" (UID: "c9710f6a-7b47-4f62-bc11-9d5727fdb01f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:59:00 crc kubenswrapper[4979]: I0130 21:59:00.620249 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:00 crc kubenswrapper[4979]: E0130 21:59:00.620511 4979 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:59:00 crc kubenswrapper[4979]: E0130 21:59:00.620606 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:04.620585469 +0000 UTC m=+1140.581832502 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "metrics-server-cert" not found
Jan 30 21:59:00 crc kubenswrapper[4979]: I0130 21:59:00.722618 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:59:00 crc kubenswrapper[4979]: E0130 21:59:00.722800 4979 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 30 21:59:00 crc kubenswrapper[4979]: E0130 21:59:00.722928 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:04.722902895 +0000 UTC m=+1140.684149928 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "webhook-server-cert" not found
Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.039495 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.040044 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.040115 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg"
Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.041043 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d09f2b9fb9e70c284933384af86903d057bc10cc69d7514572c72f1e0e4710ff"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.041114 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://d09f2b9fb9e70c284933384af86903d057bc10cc69d7514572c72f1e0e4710ff" gracePeriod=600
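[Annotation] The machine-config-daemon entries above are a standard liveness-probe restart: the HTTP probe is refused, kubelet marks the container unhealthy and kills it with the pod's 600-second grace period, and the ContainerDied and RemoveContainer events that follow are the old container being reaped before the replacement starts. A standalone sketch of roughly what the prober does here (the port and path come from the log line; the one-second timeout matches the probe's default timeoutSeconds):

```go
// Standalone imitation of the failing liveness probe above. A refused
// connection is exactly what kubelet records as a probe failure.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second} // probe timeoutSeconds default
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Println("probe failed:", err) // kubelet: status="unhealthy", container is restarted
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe ok:", resp.Status)
}
```

By default a container must fail failureThreshold (3) consecutive probes before it is killed; the single failure logged here is presumably the last in such a run.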
containerID="d09f2b9fb9e70c284933384af86903d057bc10cc69d7514572c72f1e0e4710ff" exitCode=0 Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.506696 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"d09f2b9fb9e70c284933384af86903d057bc10cc69d7514572c72f1e0e4710ff"} Jan 30 21:59:02 crc kubenswrapper[4979]: I0130 21:59:02.506737 4979 scope.go:117] "RemoveContainer" containerID="ae293b4c8eb11a00dbc67116c5050f26eebdb7d47b98e26880adeb06c2d3bf28" Jan 30 21:59:03 crc kubenswrapper[4979]: I0130 21:59:03.466558 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:59:03 crc kubenswrapper[4979]: E0130 21:59:03.466826 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:59:03 crc kubenswrapper[4979]: E0130 21:59:03.466932 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:11.466911335 +0000 UTC m=+1147.428158368 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:59:04 crc kubenswrapper[4979]: I0130 21:59:04.381206 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.381465 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.381773 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:59:12.381744721 +0000 UTC m=+1148.342991754 (durationBeforeRetry 8s). 
Jan 30 21:59:03 crc kubenswrapper[4979]: I0130 21:59:03.466558 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:59:03 crc kubenswrapper[4979]: E0130 21:59:03.466826 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 21:59:03 crc kubenswrapper[4979]: E0130 21:59:03.466932 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:11.466911335 +0000 UTC m=+1147.428158368 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found
Jan 30 21:59:04 crc kubenswrapper[4979]: I0130 21:59:04.381206 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.381465 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.381773 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:59:12.381744721 +0000 UTC m=+1148.342991754 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" (UID: "c9710f6a-7b47-4f62-bc11-9d5727fdb01f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:59:04 crc kubenswrapper[4979]: I0130 21:59:04.687172 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.687343 4979 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.687435 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:12.687414505 +0000 UTC m=+1148.648661538 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "metrics-server-cert" not found
Jan 30 21:59:04 crc kubenswrapper[4979]: I0130 21:59:04.788672 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.788874 4979 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 30 21:59:04 crc kubenswrapper[4979]: E0130 21:59:04.788956 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:12.78893087 +0000 UTC m=+1148.750177903 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "webhook-server-cert" not found
Jan 30 21:59:10 crc kubenswrapper[4979]: E0130 21:59:10.990537 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:ebc99d4caf2352643c25de5816c34dfe551961e39261e26ff89ee0afdd98819c"
Jan 30 21:59:10 crc kubenswrapper[4979]: E0130 21:59:10.991618 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:ebc99d4caf2352643c25de5816c34dfe551961e39261e26ff89ee0afdd98819c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8x6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7d96d95959-5s8xm_openstack-operators(7f396cc2-4739-4401-9319-36881d4f449d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 21:59:10 crc kubenswrapper[4979]: E0130 21:59:10.992816 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" podUID="7f396cc2-4739-4401-9319-36881d4f449d"
Jan 30 21:59:11 crc kubenswrapper[4979]: I0130 21:59:11.506336 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469"
Jan 30 21:59:11 crc kubenswrapper[4979]: E0130 21:59:11.506624 4979 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 21:59:11 crc kubenswrapper[4979]: E0130 21:59:11.506780 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert podName:5966d922-4db9-40f7-baf1-5624f1a033d6 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:27.506742675 +0000 UTC m=+1163.467989758 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert") pod "infra-operator-controller-manager-79955696d6-9q469" (UID: "5966d922-4db9-40f7-baf1-5624f1a033d6") : secret "infra-operator-webhook-server-cert" not found
Jan 30 21:59:11 crc kubenswrapper[4979]: E0130 21:59:11.619599 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:ebc99d4caf2352643c25de5816c34dfe551961e39261e26ff89ee0afdd98819c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" podUID="7f396cc2-4739-4401-9319-36881d4f449d"
Jan 30 21:59:12 crc kubenswrapper[4979]: I0130 21:59:12.420130 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"
Jan 30 21:59:12 crc kubenswrapper[4979]: E0130 21:59:12.420324 4979 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:59:12 crc kubenswrapper[4979]: E0130 21:59:12.420399 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert podName:c9710f6a-7b47-4f62-bc11-9d5727fdb01f nodeName:}" failed. No retries permitted until 2026-01-30 21:59:28.420378097 +0000 UTC m=+1164.381625130 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" (UID: "c9710f6a-7b47-4f62-bc11-9d5727fdb01f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:59:12 crc kubenswrapper[4979]: I0130 21:59:12.726136 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:59:12 crc kubenswrapper[4979]: E0130 21:59:12.726500 4979 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 30 21:59:12 crc kubenswrapper[4979]: E0130 21:59:12.726704 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:28.726595846 +0000 UTC m=+1164.687842919 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "metrics-server-cert" not found
Jan 30 21:59:12 crc kubenswrapper[4979]: I0130 21:59:12.828822 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"
Jan 30 21:59:12 crc kubenswrapper[4979]: E0130 21:59:12.829156 4979 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 30 21:59:12 crc kubenswrapper[4979]: E0130 21:59:12.829273 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs podName:cea237e7-6ca9-4dcd-b5d6-d471898e2c09 nodeName:}" failed. No retries permitted until 2026-01-30 21:59:28.829239741 +0000 UTC m=+1164.790486814 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs") pod "openstack-operator-controller-manager-5b5794dddd-fgq92" (UID: "cea237e7-6ca9-4dcd-b5d6-d471898e2c09") : secret "webhook-server-cert" not found
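Note the retry cadence across the three rounds above: durationBeforeRetry grows 4s -> 8s -> 16s, the kubelet's exponential backoff for failed MountVolume operations (nestedpendingoperations.go). The same doubling-with-cap pattern is what produces the ImagePullBackOff states for the image pulls that fail elsewhere in this log. A minimal sketch of the doubling, assuming an illustrative cap (the log only shows the 4s/8s/16s steps):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 4 * time.Second         // first durationBeforeRetry seen in the log
        const maxDelay = 2 * time.Minute // assumed cap; not visible in the log
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d failed, next retry in %s\n", attempt, delay)
            delay *= 2 // double the wait after each failure
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }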
podUID="777d41f5-6e7f-4099-9f6f-aceaf0b972da" Jan 30 21:59:20 crc kubenswrapper[4979]: I0130 21:59:20.690960 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"9dd828028bd8f4b59424b93888d32e1ab8101a0db37322829e13e6a47a54aa2c"} Jan 30 21:59:20 crc kubenswrapper[4979]: I0130 21:59:20.695874 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" event={"ID":"9c8cf87b-4069-497d-9fcc-3b7be476ed4d","Type":"ContainerStarted","Data":"a94eccfc7a3c43517234031c1637215147533ee44bb3c9a4aaf2284329686b25"} Jan 30 21:59:20 crc kubenswrapper[4979]: I0130 21:59:20.695932 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" Jan 30 21:59:20 crc kubenswrapper[4979]: E0130 21:59:20.698208 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" podUID="777d41f5-6e7f-4099-9f6f-aceaf0b972da" Jan 30 21:59:20 crc kubenswrapper[4979]: I0130 21:59:20.743353 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" podStartSLOduration=3.288672078 podStartE2EDuration="25.74332449s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.806166482 +0000 UTC m=+1133.767413515" lastFinishedPulling="2026-01-30 21:59:20.260818894 +0000 UTC m=+1156.222065927" observedRunningTime="2026-01-30 21:59:20.742088227 +0000 UTC m=+1156.703335260" watchObservedRunningTime="2026-01-30 21:59:20.74332449 +0000 UTC m=+1156.704571523" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.730663 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" event={"ID":"1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7","Type":"ContainerStarted","Data":"fafda2ccb236b70bec9150792062e9a0576972dc1d0ea3b11e870514b11ebbdc"} Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.732521 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.746775 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" event={"ID":"11771b88-abd2-436e-a95c-5113a5bae88b","Type":"ContainerStarted","Data":"148f670e2891d921d4b7bdc19541dc4ac3c7efae836d6d1127acbfe8c825f3bc"} Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.746947 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.782302 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" event={"ID":"dcd08638-857d-40cd-a92c-b6dcef0bc329","Type":"ContainerStarted","Data":"734b7f7622002596a65f09e60c75ba4aa63a9e4eb02eb0c3a268d5bb0745989d"} Jan 30 21:59:21 crc 
kubenswrapper[4979]: I0130 21:59:21.782871 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.803071 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" event={"ID":"07393de3-4dbb-4de1-a7fc-49785a623de2","Type":"ContainerStarted","Data":"0a7f7cd6ea14a9f0accb348f49a38e95327b951156eada009517b3d8b5cca9a3"} Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.803414 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.819427 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" podStartSLOduration=4.349214403 podStartE2EDuration="26.819404965s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.811462386 +0000 UTC m=+1133.772709419" lastFinishedPulling="2026-01-30 21:59:20.281652948 +0000 UTC m=+1156.242899981" observedRunningTime="2026-01-30 21:59:21.814714498 +0000 UTC m=+1157.775961531" watchObservedRunningTime="2026-01-30 21:59:21.819404965 +0000 UTC m=+1157.780651988" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.823339 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" event={"ID":"8893a935-e9c7-4d38-ae0c-17a94445475f","Type":"ContainerStarted","Data":"ebd433217afa9acce4a52eeae57a99ec5deed2dc3f89bdbcc4b519c59df39d0b"} Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.824165 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.854779 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" event={"ID":"39f45c61-20b7-4d98-98af-526018a240c1","Type":"ContainerStarted","Data":"81e67d28011955ad1ef4c8797df5606e2475e65b7ba2d9516c364c2e26aaab3a"} Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.855324 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.878784 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" podStartSLOduration=3.705911398 podStartE2EDuration="26.878758079s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.087950893 +0000 UTC m=+1133.049197926" lastFinishedPulling="2026-01-30 21:59:20.260797524 +0000 UTC m=+1156.222044607" observedRunningTime="2026-01-30 21:59:21.875494061 +0000 UTC m=+1157.836741094" watchObservedRunningTime="2026-01-30 21:59:21.878758079 +0000 UTC m=+1157.840005112" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.888200 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" event={"ID":"0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc","Type":"ContainerStarted","Data":"5d822d11486e4199454ccb1a78bf6661761feec388c6b26beed488725d4f8fdb"} Jan 30 21:59:21 crc 
kubenswrapper[4979]: I0130 21:59:21.888382 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.918739 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" event={"ID":"2487dbd3-ca49-4b26-99e3-2c858b549944","Type":"ContainerStarted","Data":"bbaf76139ef03473391138318100a70cef117e9012be4b7c59bb241f35603776"} Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.918835 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.921054 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" podStartSLOduration=3.698337055 podStartE2EDuration="26.921005832s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.038791234 +0000 UTC m=+1133.000038257" lastFinishedPulling="2026-01-30 21:59:20.261459991 +0000 UTC m=+1156.222707034" observedRunningTime="2026-01-30 21:59:21.913253872 +0000 UTC m=+1157.874500895" watchObservedRunningTime="2026-01-30 21:59:21.921005832 +0000 UTC m=+1157.882252865" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.932614 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" event={"ID":"73527aaf-5de3-4a3e-aa4c-f2ac98e5be11","Type":"ContainerStarted","Data":"a267f4c4f8adc8f092ba404528b45428db7ce12bd81c5ca3b75c5f46c15eb392"} Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.933601 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.940549 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd" event={"ID":"9134e6d2-b638-49be-9612-be12250e0a6d","Type":"ContainerStarted","Data":"ae6aa0e568e734f58e8a2537e452a9844185645beeb04db2212a04d41daacf8f"} Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.941272 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.948372 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" event={"ID":"82a19f5f-9a94-4b08-8795-22fce21897bf","Type":"ContainerStarted","Data":"fd8c6207905112db605632c95166aa9c01f9a387c36d06a59cb86ff90aa82113"} Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.949238 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.983520 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" event={"ID":"cf2e278a-e0cb-4505-bd08-38c02155a632","Type":"ContainerStarted","Data":"ed4bcbc6a95d839bdc6a016f39a72c92c62cd8152ee8a9c1d5b73d2469dc0d51"} Jan 30 21:59:21 crc kubenswrapper[4979]: I0130 21:59:21.983713 4979 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.004804 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" podStartSLOduration=4.4694997149999995 podStartE2EDuration="27.004776746s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.747887187 +0000 UTC m=+1133.709134220" lastFinishedPulling="2026-01-30 21:59:20.283164218 +0000 UTC m=+1156.244411251" observedRunningTime="2026-01-30 21:59:22.002391173 +0000 UTC m=+1157.963638206" watchObservedRunningTime="2026-01-30 21:59:22.004776746 +0000 UTC m=+1157.966023779" Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.005227 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" podStartSLOduration=4.614760744 podStartE2EDuration="27.005214829s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.892711683 +0000 UTC m=+1133.853958716" lastFinishedPulling="2026-01-30 21:59:20.283165768 +0000 UTC m=+1156.244412801" observedRunningTime="2026-01-30 21:59:21.973392868 +0000 UTC m=+1157.934639901" watchObservedRunningTime="2026-01-30 21:59:22.005214829 +0000 UTC m=+1157.966461862" Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.047087 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" podStartSLOduration=4.652301209 podStartE2EDuration="27.047062901s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.866173415 +0000 UTC m=+1133.827420448" lastFinishedPulling="2026-01-30 21:59:20.260935097 +0000 UTC m=+1156.222182140" observedRunningTime="2026-01-30 21:59:22.043859843 +0000 UTC m=+1158.005106896" watchObservedRunningTime="2026-01-30 21:59:22.047062901 +0000 UTC m=+1158.008309934" Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.082758 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd" podStartSLOduration=4.189387632 podStartE2EDuration="27.082734705s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.37186351 +0000 UTC m=+1133.333110543" lastFinishedPulling="2026-01-30 21:59:20.265210583 +0000 UTC m=+1156.226457616" observedRunningTime="2026-01-30 21:59:22.082217551 +0000 UTC m=+1158.043464594" watchObservedRunningTime="2026-01-30 21:59:22.082734705 +0000 UTC m=+1158.043981738" Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.115552 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" podStartSLOduration=5.070241238 podStartE2EDuration="27.115527751s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.235873151 +0000 UTC m=+1134.197120184" lastFinishedPulling="2026-01-30 21:59:20.281159664 +0000 UTC m=+1156.242406697" observedRunningTime="2026-01-30 21:59:22.112787928 +0000 UTC m=+1158.074034961" watchObservedRunningTime="2026-01-30 21:59:22.115527751 +0000 UTC m=+1158.076774784" Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.155869 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" podStartSLOduration=4.644234039 podStartE2EDuration="27.155843381s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.771939997 +0000 UTC m=+1133.733187030" lastFinishedPulling="2026-01-30 21:59:20.283549299 +0000 UTC m=+1156.244796372" observedRunningTime="2026-01-30 21:59:22.141791041 +0000 UTC m=+1158.103038074" watchObservedRunningTime="2026-01-30 21:59:22.155843381 +0000 UTC m=+1158.117090414" Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.165622 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" podStartSLOduration=4.077449303 podStartE2EDuration="26.165603195s" podCreationTimestamp="2026-01-30 21:58:56 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.193159376 +0000 UTC m=+1134.154406409" lastFinishedPulling="2026-01-30 21:59:20.281313248 +0000 UTC m=+1156.242560301" observedRunningTime="2026-01-30 21:59:22.165247786 +0000 UTC m=+1158.126494809" watchObservedRunningTime="2026-01-30 21:59:22.165603195 +0000 UTC m=+1158.126850218" Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.205595 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" podStartSLOduration=5.089609302 podStartE2EDuration="27.205568406s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.162921559 +0000 UTC m=+1134.124168582" lastFinishedPulling="2026-01-30 21:59:20.278880653 +0000 UTC m=+1156.240127686" observedRunningTime="2026-01-30 21:59:22.198668489 +0000 UTC m=+1158.159915522" watchObservedRunningTime="2026-01-30 21:59:22.205568406 +0000 UTC m=+1158.166815439" Jan 30 21:59:22 crc kubenswrapper[4979]: I0130 21:59:22.231079 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" podStartSLOduration=5.12760936 podStartE2EDuration="27.231048405s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.191193983 +0000 UTC m=+1134.152441016" lastFinishedPulling="2026-01-30 21:59:20.294632988 +0000 UTC m=+1156.255880061" observedRunningTime="2026-01-30 21:59:22.224338123 +0000 UTC m=+1158.185585176" watchObservedRunningTime="2026-01-30 21:59:22.231048405 +0000 UTC m=+1158.192295438" Jan 30 21:59:25 crc kubenswrapper[4979]: I0130 21:59:25.506153 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-r2mb8" Jan 30 21:59:25 crc kubenswrapper[4979]: I0130 21:59:25.515471 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-p95sz" Jan 30 21:59:25 crc kubenswrapper[4979]: I0130 21:59:25.545351 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-5k7wd" Jan 30 21:59:25 crc kubenswrapper[4979]: I0130 21:59:25.841381 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-zqjfh" Jan 30 21:59:25 crc kubenswrapper[4979]: I0130 21:59:25.919712 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-h59f2" Jan 30 21:59:25 crc kubenswrapper[4979]: I0130 21:59:25.991692 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-5pmpx" Jan 30 21:59:26 crc kubenswrapper[4979]: I0130 21:59:26.029899 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-lrqnv" Jan 30 21:59:26 crc kubenswrapper[4979]: I0130 21:59:26.410062 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-g6pnt" Jan 30 21:59:26 crc kubenswrapper[4979]: I0130 21:59:26.581201 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5644b66645-lz8dw" Jan 30 21:59:26 crc kubenswrapper[4979]: I0130 21:59:26.692848 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-58s6k" Jan 30 21:59:26 crc kubenswrapper[4979]: I0130 21:59:26.716381 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-6f7vv" Jan 30 21:59:26 crc kubenswrapper[4979]: I0130 21:59:26.926839 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7f98k" Jan 30 21:59:27 crc kubenswrapper[4979]: I0130 21:59:27.181253 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dpkrg" Jan 30 21:59:27 crc kubenswrapper[4979]: I0130 21:59:27.563527 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:59:27 crc kubenswrapper[4979]: I0130 21:59:27.571824 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5966d922-4db9-40f7-baf1-5624f1a033d6-cert\") pod \"infra-operator-controller-manager-79955696d6-9q469\" (UID: \"5966d922-4db9-40f7-baf1-5624f1a033d6\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:59:27 crc kubenswrapper[4979]: I0130 21:59:27.756350 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-x2mf7" Jan 30 21:59:27 crc kubenswrapper[4979]: I0130 21:59:27.764636 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.042926 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" event={"ID":"baa9dff2-93f9-4590-a86d-cd891b4273f2","Type":"ContainerStarted","Data":"81ae69f1f4042b3f0b32cf931dd77c71b9c8eaed6e942a169a08e55203d5c127"} Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.043639 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.044571 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" event={"ID":"7f396cc2-4739-4401-9319-36881d4f449d","Type":"ContainerStarted","Data":"5e5fa102062b3213173b4b7028fc98f5078d10883255397a9069aa503c17f1ec"} Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.044878 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.047659 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" event={"ID":"c15b97e5-3fe4-4f42-9501-b4c7c083bdbb","Type":"ContainerStarted","Data":"075dd3f2e95c9837d79739c8021bfa7451815803fdc054bdcccf94a01e4c6eaa"} Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.047913 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.049158 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" event={"ID":"788f4d92-590f-44b1-8b93-a15b9f88b052","Type":"ContainerStarted","Data":"90753a5a0ae7bc9967f363c232d527fcd42792a4f40de2164c36c086150ba040"} Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.050825 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" event={"ID":"bf959f71-8af9-4121-888f-13207cc2e1d0","Type":"ContainerStarted","Data":"dd9829cf601b74c8c1136dabe4d47b911aaa6177c6e53144e68c6e727773c7aa"} Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.050997 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.054383 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" event={"ID":"31481495-f181-449a-887e-ed58bf88c783","Type":"ContainerStarted","Data":"d4a637bece9ddf4ea4b7ae2fb88dd2c6108ec36660b8882056b88ad6c796eeab"} Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.054632 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.064973 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" podStartSLOduration=3.016210617 podStartE2EDuration="32.064948597s" podCreationTimestamp="2026-01-30 
21:58:56 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.26872418 +0000 UTC m=+1134.229971213" lastFinishedPulling="2026-01-30 21:59:27.31746216 +0000 UTC m=+1163.278709193" observedRunningTime="2026-01-30 21:59:28.062645645 +0000 UTC m=+1164.023892678" watchObservedRunningTime="2026-01-30 21:59:28.064948597 +0000 UTC m=+1164.026195620" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.113953 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" podStartSLOduration=4.046386115 podStartE2EDuration="33.113931725s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.279225473 +0000 UTC m=+1134.240472506" lastFinishedPulling="2026-01-30 21:59:27.346771083 +0000 UTC m=+1163.308018116" observedRunningTime="2026-01-30 21:59:28.11081158 +0000 UTC m=+1164.072058623" watchObservedRunningTime="2026-01-30 21:59:28.113931725 +0000 UTC m=+1164.075178758" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.161447 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" podStartSLOduration=3.604626945 podStartE2EDuration="33.161415552s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.800096709 +0000 UTC m=+1133.761343742" lastFinishedPulling="2026-01-30 21:59:27.356885316 +0000 UTC m=+1163.318132349" observedRunningTime="2026-01-30 21:59:28.14427913 +0000 UTC m=+1164.105526173" watchObservedRunningTime="2026-01-30 21:59:28.161415552 +0000 UTC m=+1164.122662585" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.163684 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" podStartSLOduration=4.092912536 podStartE2EDuration="33.163659892s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.275865243 +0000 UTC m=+1134.237112276" lastFinishedPulling="2026-01-30 21:59:27.346612589 +0000 UTC m=+1163.307859632" observedRunningTime="2026-01-30 21:59:28.161984877 +0000 UTC m=+1164.123231930" watchObservedRunningTime="2026-01-30 21:59:28.163659892 +0000 UTC m=+1164.124906925" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.219167 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r4rcx" podStartSLOduration=3.098054377 podStartE2EDuration="32.219145285s" podCreationTimestamp="2026-01-30 21:58:56 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.279394377 +0000 UTC m=+1134.240641410" lastFinishedPulling="2026-01-30 21:59:27.400485275 +0000 UTC m=+1163.361732318" observedRunningTime="2026-01-30 21:59:28.201297284 +0000 UTC m=+1164.162544327" watchObservedRunningTime="2026-01-30 21:59:28.219145285 +0000 UTC m=+1164.180392318" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.221502 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" podStartSLOduration=4.111169011 podStartE2EDuration="33.221493277s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:58.245001018 +0000 UTC m=+1134.206248061" lastFinishedPulling="2026-01-30 21:59:27.355325294 +0000 UTC m=+1163.316572327" observedRunningTime="2026-01-30 21:59:28.218109537 +0000 UTC 
m=+1164.179356570" watchObservedRunningTime="2026-01-30 21:59:28.221493277 +0000 UTC m=+1164.182740310" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.276312 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-9q469"] Jan 30 21:59:28 crc kubenswrapper[4979]: W0130 21:59:28.276524 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5966d922_4db9_40f7_baf1_5624f1a033d6.slice/crio-98129d44030759b860c8c1a76851b0d43a822ce6287df4d23515cb4f6ef3bd94 WatchSource:0}: Error finding container 98129d44030759b860c8c1a76851b0d43a822ce6287df4d23515cb4f6ef3bd94: Status 404 returned error can't find the container with id 98129d44030759b860c8c1a76851b0d43a822ce6287df4d23515cb4f6ef3bd94 Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.479508 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.487769 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9710f6a-7b47-4f62-bc11-9d5727fdb01f-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg\" (UID: \"c9710f6a-7b47-4f62-bc11-9d5727fdb01f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.624749 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2bfdd" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.633142 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.785315 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.790663 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-metrics-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.888615 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.893947 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cea237e7-6ca9-4dcd-b5d6-d471898e2c09-webhook-certs\") pod \"openstack-operator-controller-manager-5b5794dddd-fgq92\" (UID: \"cea237e7-6ca9-4dcd-b5d6-d471898e2c09\") " pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:28 crc kubenswrapper[4979]: I0130 21:59:28.975864 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg"] Jan 30 21:59:28 crc kubenswrapper[4979]: W0130 21:59:28.998191 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9710f6a_7b47_4f62_bc11_9d5727fdb01f.slice/crio-45ef96540cb8fa7d33bae91cb285048b3d603f0f79ce2613da333b5780122093 WatchSource:0}: Error finding container 45ef96540cb8fa7d33bae91cb285048b3d603f0f79ce2613da333b5780122093: Status 404 returned error can't find the container with id 45ef96540cb8fa7d33bae91cb285048b3d603f0f79ce2613da333b5780122093 Jan 30 21:59:29 crc kubenswrapper[4979]: I0130 21:59:29.003143 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-k8hfr" Jan 30 21:59:29 crc kubenswrapper[4979]: I0130 21:59:29.011019 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:29 crc kubenswrapper[4979]: I0130 21:59:29.063790 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" event={"ID":"c9710f6a-7b47-4f62-bc11-9d5727fdb01f","Type":"ContainerStarted","Data":"45ef96540cb8fa7d33bae91cb285048b3d603f0f79ce2613da333b5780122093"} Jan 30 21:59:29 crc kubenswrapper[4979]: I0130 21:59:29.108566 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" event={"ID":"5966d922-4db9-40f7-baf1-5624f1a033d6","Type":"ContainerStarted","Data":"98129d44030759b860c8c1a76851b0d43a822ce6287df4d23515cb4f6ef3bd94"} Jan 30 21:59:29 crc kubenswrapper[4979]: I0130 21:59:29.453113 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92"] Jan 30 21:59:29 crc kubenswrapper[4979]: W0130 21:59:29.455799 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcea237e7_6ca9_4dcd_b5d6_d471898e2c09.slice/crio-50a2b381774a32388ab90e578cff5a64f0221e3f4668557773170855b63ae035 WatchSource:0}: Error finding container 50a2b381774a32388ab90e578cff5a64f0221e3f4668557773170855b63ae035: Status 404 returned error can't find the container with id 50a2b381774a32388ab90e578cff5a64f0221e3f4668557773170855b63ae035 Jan 30 21:59:30 crc kubenswrapper[4979]: I0130 21:59:30.090609 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" event={"ID":"cea237e7-6ca9-4dcd-b5d6-d471898e2c09","Type":"ContainerStarted","Data":"981ed0823f263ddcf3b979b191a75600f478958046fec895b8f46f170294d758"} Jan 30 21:59:30 crc kubenswrapper[4979]: I0130 21:59:30.092600 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:30 crc kubenswrapper[4979]: I0130 21:59:30.092705 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" event={"ID":"cea237e7-6ca9-4dcd-b5d6-d471898e2c09","Type":"ContainerStarted","Data":"50a2b381774a32388ab90e578cff5a64f0221e3f4668557773170855b63ae035"} Jan 30 21:59:30 crc kubenswrapper[4979]: I0130 21:59:30.123399 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" podStartSLOduration=34.12336823 podStartE2EDuration="34.12336823s" podCreationTimestamp="2026-01-30 21:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:59:30.11517011 +0000 UTC m=+1166.076417153" watchObservedRunningTime="2026-01-30 21:59:30.12336823 +0000 UTC m=+1166.084615263" Jan 30 21:59:32 crc kubenswrapper[4979]: I0130 21:59:32.108346 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" event={"ID":"c9710f6a-7b47-4f62-bc11-9d5727fdb01f","Type":"ContainerStarted","Data":"cf549d8ae18c8281fb0564c07c6307feea503b665416aaf74852a0c2b9347940"} Jan 30 21:59:32 crc kubenswrapper[4979]: I0130 21:59:32.110939 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
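Two quirks in this stretch are worth flagging. The W-level "Failed to process watch event ... Status 404" messages appear to be a transient race: the kubelet's cAdvisor manager notices a new crio-<id> cgroup before the runtime has finished registering the container, and the same container IDs show up moments later in the ContainerStarted events. And the openstack-operator-controller-manager SLO entry reports firstStartedPulling and lastFinishedPulling of 0001-01-01 00:00:00 +0000 UTC, which is Go's time.Time zero value, meaning no image pull was recorded for that pod; accordingly its podStartSLOduration equals its podStartE2EDuration (34.12s). A two-line check of that zero-value formatting:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var never time.Time // zero value: no pull was ever recorded
        fmt.Println(never)  // prints: 0001-01-01 00:00:00 +0000 UTC
    }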
status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:59:32 crc kubenswrapper[4979]: I0130 21:59:32.114147 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" event={"ID":"5966d922-4db9-40f7-baf1-5624f1a033d6","Type":"ContainerStarted","Data":"00752eaee374f18148c6894a09c1ab2a3e538f8d15c67e73b793ee5386be6282"} Jan 30 21:59:32 crc kubenswrapper[4979]: I0130 21:59:32.114194 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:59:32 crc kubenswrapper[4979]: I0130 21:59:32.142455 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" podStartSLOduration=34.71633242 podStartE2EDuration="37.142424695s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:59:29.001346016 +0000 UTC m=+1164.962593049" lastFinishedPulling="2026-01-30 21:59:31.427438291 +0000 UTC m=+1167.388685324" observedRunningTime="2026-01-30 21:59:32.139464746 +0000 UTC m=+1168.100711779" watchObservedRunningTime="2026-01-30 21:59:32.142424695 +0000 UTC m=+1168.103671718" Jan 30 21:59:32 crc kubenswrapper[4979]: I0130 21:59:32.166742 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" podStartSLOduration=34.020483201 podStartE2EDuration="37.166711028s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:59:28.282061167 +0000 UTC m=+1164.243308200" lastFinishedPulling="2026-01-30 21:59:31.428288994 +0000 UTC m=+1167.389536027" observedRunningTime="2026-01-30 21:59:32.160897722 +0000 UTC m=+1168.122144755" watchObservedRunningTime="2026-01-30 21:59:32.166711028 +0000 UTC m=+1168.127958061" Jan 30 21:59:36 crc kubenswrapper[4979]: I0130 21:59:36.146212 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" event={"ID":"777d41f5-6e7f-4099-9f6f-aceaf0b972da","Type":"ContainerStarted","Data":"6024b8df2cdd89d7404fff4df4498f7f6f89c1c0042f7aa5b8515da1ec3974ab"} Jan 30 21:59:36 crc kubenswrapper[4979]: I0130 21:59:36.147405 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" Jan 30 21:59:36 crc kubenswrapper[4979]: I0130 21:59:36.174529 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" podStartSLOduration=3.530367162 podStartE2EDuration="41.174497924s" podCreationTimestamp="2026-01-30 21:58:55 +0000 UTC" firstStartedPulling="2026-01-30 21:58:57.857559502 +0000 UTC m=+1133.818806535" lastFinishedPulling="2026-01-30 21:59:35.501690264 +0000 UTC m=+1171.462937297" observedRunningTime="2026-01-30 21:59:36.17025474 +0000 UTC m=+1172.131501773" watchObservedRunningTime="2026-01-30 21:59:36.174497924 +0000 UTC m=+1172.135744957" Jan 30 21:59:36 crc kubenswrapper[4979]: I0130 21:59:36.296716 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-5s8xm" Jan 30 21:59:36 crc kubenswrapper[4979]: I0130 21:59:36.656473 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/neutron-operator-controller-manager-576995988b-v774d" Jan 30 21:59:37 crc kubenswrapper[4979]: I0130 21:59:37.035483 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-78f4b" Jan 30 21:59:37 crc kubenswrapper[4979]: I0130 21:59:37.093652 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-nc5fg" Jan 30 21:59:37 crc kubenswrapper[4979]: I0130 21:59:37.130516 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-57br8" Jan 30 21:59:37 crc kubenswrapper[4979]: I0130 21:59:37.773402 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9q469" Jan 30 21:59:38 crc kubenswrapper[4979]: I0130 21:59:38.640589 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg" Jan 30 21:59:39 crc kubenswrapper[4979]: I0130 21:59:39.021732 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5b5794dddd-fgq92" Jan 30 21:59:46 crc kubenswrapper[4979]: I0130 21:59:46.364012 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-6bb56" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.508197 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hgv8v"] Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.510759 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.515749 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.516083 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-dqnfn" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.516303 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.516468 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.536299 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hgv8v"] Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.548166 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nnpz\" (UniqueName: \"kubernetes.io/projected/8de8c822-a2be-442f-af66-2e2b1991b947-kube-api-access-8nnpz\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.548247 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.600241 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-27kgx"] Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.601730 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.604456 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.607796 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-27kgx"] Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.650007 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4s8n\" (UniqueName: \"kubernetes.io/projected/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-kube-api-access-m4s8n\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.650171 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nnpz\" (UniqueName: \"kubernetes.io/projected/8de8c822-a2be-442f-af66-2e2b1991b947-kube-api-access-8nnpz\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.650203 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-config\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.650233 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.650263 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.651468 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.671669 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nnpz\" (UniqueName: \"kubernetes.io/projected/8de8c822-a2be-442f-af66-2e2b1991b947-kube-api-access-8nnpz\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.750990 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-config\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 
21:59:59.751082 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.751158 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4s8n\" (UniqueName: \"kubernetes.io/projected/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-kube-api-access-m4s8n\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.752773 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-config\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.753175 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.782266 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4s8n\" (UniqueName: \"kubernetes.io/projected/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-kube-api-access-m4s8n\") pod \"dnsmasq-dns-78dd6ddcc-27kgx\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.837094 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 21:59:59 crc kubenswrapper[4979]: I0130 21:59:59.921840 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.185149 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4"] Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.186531 4979 util.go:30] "No sandbox for pod can be found. 
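
For each volume of the two dnsmasq pods, the reconciler logs the same visible progression: "operationExecutor.VerifyControllerAttachedVolume started", then "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded" (device-backed volumes add a MountVolume.MountDevice step, which appears later for the rabbitmq local PVs). Folding the entries into a last-stage-per-volume map makes a volume that never reaches "SetUp succeeded" stand out; a sketch under that reading, with stage names of my own:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // Stage names are mine; the matched phrases are verbatim from the log.
    var stages = []struct{ phrase, state string }{
        {"operationExecutor.VerifyControllerAttachedVolume started", "attach-verified"},
        {"operationExecutor.MountVolume started", "mount-started"},
        {"MountVolume.MountDevice succeeded", "device-staged"},
        {"MountVolume.SetUp succeeded", "mounted"},
    }

    // Captures the volume and pod names from the escaped quotes in these entries.
    var volRe = regexp.MustCompile(`volume \\"([^\\]+)\\" .*pod \\"([^\\]+)\\"`)

    func fold(lines []string) map[string]string {
        state := map[string]string{} // "pod/volume" -> latest stage seen
        for _, l := range lines {
            m := volRe.FindStringSubmatch(l)
            if m == nil {
                continue
            }
            for _, s := range stages {
                if strings.Contains(l, s.phrase) {
                    state[m[2]+"/"+m[1]] = s.state
                }
            }
        }
        return state
    }

    func main() {
        lines := []string{
            `"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") "`,
            `"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config\") pod \"dnsmasq-dns-675f4bcbfc-hgv8v\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") "`,
        }
        fmt.Println(fold(lines)) // map[dnsmasq-dns-675f4bcbfc-hgv8v/config:mounted]
    }
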
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.197868 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.197893 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.204709 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4"] Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.361201 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cfffa-828e-4f0e-9903-4c1580e20c67-config-volume\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.361382 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qql6c\" (UniqueName: \"kubernetes.io/projected/365cfffa-828e-4f0e-9903-4c1580e20c67-kube-api-access-qql6c\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.361593 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cfffa-828e-4f0e-9903-4c1580e20c67-secret-volume\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.463324 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cfffa-828e-4f0e-9903-4c1580e20c67-config-volume\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.463533 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qql6c\" (UniqueName: \"kubernetes.io/projected/365cfffa-828e-4f0e-9903-4c1580e20c67-kube-api-access-qql6c\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.463628 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cfffa-828e-4f0e-9903-4c1580e20c67-secret-volume\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.465877 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cfffa-828e-4f0e-9903-4c1580e20c67-config-volume\") pod 
\"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.470219 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cfffa-828e-4f0e-9903-4c1580e20c67-secret-volume\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.488902 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qql6c\" (UniqueName: \"kubernetes.io/projected/365cfffa-828e-4f0e-9903-4c1580e20c67-kube-api-access-qql6c\") pod \"collect-profiles-29496840-tqcs4\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.539788 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.572162 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-27kgx"] Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.579352 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hgv8v"] Jan 30 22:00:00 crc kubenswrapper[4979]: W0130 22:00:00.582227 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7326bcf_bcff_43db_a33c_04f37fbd0ad6.slice/crio-1b162861f81aba1e9041dcdb0992a3be89fec862f4c632e520adbd2c86190d6f WatchSource:0}: Error finding container 1b162861f81aba1e9041dcdb0992a3be89fec862f4c632e520adbd2c86190d6f: Status 404 returned error can't find the container with id 1b162861f81aba1e9041dcdb0992a3be89fec862f4c632e520adbd2c86190d6f Jan 30 22:00:00 crc kubenswrapper[4979]: W0130 22:00:00.583803 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8de8c822_a2be_442f_af66_2e2b1991b947.slice/crio-e7c271f9b80119af98bbde738800e5055c5495ef3ba3e1c2862a48b9000060eb WatchSource:0}: Error finding container e7c271f9b80119af98bbde738800e5055c5495ef3ba3e1c2862a48b9000060eb: Status 404 returned error can't find the container with id e7c271f9b80119af98bbde738800e5055c5495ef3ba3e1c2862a48b9000060eb Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.587734 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:00:00 crc kubenswrapper[4979]: I0130 22:00:00.972398 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4"] Jan 30 22:00:00 crc kubenswrapper[4979]: W0130 22:00:00.984657 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod365cfffa_828e_4f0e_9903_4c1580e20c67.slice/crio-4a93ad8a04f1005faa8a0dc60664c1d9ce2d911143afa4c738b119ceb5660e48 WatchSource:0}: Error finding container 4a93ad8a04f1005faa8a0dc60664c1d9ce2d911143afa4c738b119ceb5660e48: Status 404 returned error can't find the container with id 4a93ad8a04f1005faa8a0dc60664c1d9ce2d911143afa4c738b119ceb5660e48 Jan 30 22:00:01 crc 
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.361854 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" event={"ID":"365cfffa-828e-4f0e-9903-4c1580e20c67","Type":"ContainerStarted","Data":"63071af88423f456a45a4b58ad51314f65c32700ee4fa8a2ebb6bbca8fea7b68"}
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.362425 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" event={"ID":"365cfffa-828e-4f0e-9903-4c1580e20c67","Type":"ContainerStarted","Data":"4a93ad8a04f1005faa8a0dc60664c1d9ce2d911143afa4c738b119ceb5660e48"}
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.366444 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" event={"ID":"8de8c822-a2be-442f-af66-2e2b1991b947","Type":"ContainerStarted","Data":"e7c271f9b80119af98bbde738800e5055c5495ef3ba3e1c2862a48b9000060eb"}
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.368131 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" event={"ID":"f7326bcf-bcff-43db-a33c-04f37fbd0ad6","Type":"ContainerStarted","Data":"1b162861f81aba1e9041dcdb0992a3be89fec862f4c632e520adbd2c86190d6f"}
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.385433 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" podStartSLOduration=1.385364343 podStartE2EDuration="1.385364343s" podCreationTimestamp="2026-01-30 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:00:01.384735287 +0000 UTC m=+1197.345982330" watchObservedRunningTime="2026-01-30 22:00:01.385364343 +0000 UTC m=+1197.346611366"
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.756687 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hgv8v"]
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.789753 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-2qnqn"]
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.791218 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.803292 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-2qnqn"]
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.901634 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69j9n\" (UniqueName: \"kubernetes.io/projected/32529c55-774e-471a-8d6e-9ff5ba02c047-kube-api-access-69j9n\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.901726 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-config\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:01 crc kubenswrapper[4979]: I0130 22:00:01.901780 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.003506 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69j9n\" (UniqueName: \"kubernetes.io/projected/32529c55-774e-471a-8d6e-9ff5ba02c047-kube-api-access-69j9n\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.003564 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-config\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.003603 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.004648 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.005440 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-config\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
\"kubernetes.io/projected/32529c55-774e-471a-8d6e-9ff5ba02c047-kube-api-access-69j9n\") pod \"dnsmasq-dns-5ccc8479f9-2qnqn\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") " pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.127133 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.140638 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-27kgx"] Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.165161 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvmv4"] Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.171691 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.183507 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvmv4"] Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.309064 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-config\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.309179 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75t5q\" (UniqueName: \"kubernetes.io/projected/16f23a16-7799-4e68-a4f9-0a392a20d0ee-kube-api-access-75t5q\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.309206 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.401538 4979 generic.go:334] "Generic (PLEG): container finished" podID="365cfffa-828e-4f0e-9903-4c1580e20c67" containerID="63071af88423f456a45a4b58ad51314f65c32700ee4fa8a2ebb6bbca8fea7b68" exitCode=0 Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.401592 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" event={"ID":"365cfffa-828e-4f0e-9903-4c1580e20c67","Type":"ContainerDied","Data":"63071af88423f456a45a4b58ad51314f65c32700ee4fa8a2ebb6bbca8fea7b68"} Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.411616 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-config\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.412604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-config\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: 
\"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.412816 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75t5q\" (UniqueName: \"kubernetes.io/projected/16f23a16-7799-4e68-a4f9-0a392a20d0ee-kube-api-access-75t5q\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.412845 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.413630 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.443804 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75t5q\" (UniqueName: \"kubernetes.io/projected/16f23a16-7799-4e68-a4f9-0a392a20d0ee-kube-api-access-75t5q\") pod \"dnsmasq-dns-57d769cc4f-rvmv4\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.518604 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-2qnqn"] Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.564557 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.948007 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.950369 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.954006 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.954267 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.954532 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.954764 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gddkv" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.955017 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.961287 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.961562 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 22:00:02 crc kubenswrapper[4979]: I0130 22:00:02.982211 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.128945 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.129689 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.129720 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/981f1fee-4d2a-4d80-bf38-80557b6c5033-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.129738 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.129883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.129926 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.129952 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/981f1fee-4d2a-4d80-bf38-80557b6c5033-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.130240 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8t7j\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-kube-api-access-h8t7j\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.130337 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.130487 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.130541 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232455 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232587 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232616 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/981f1fee-4d2a-4d80-bf38-80557b6c5033-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232643 4979 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232715 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232755 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232787 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/981f1fee-4d2a-4d80-bf38-80557b6c5033-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232874 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8t7j\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-kube-api-access-h8t7j\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232907 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232950 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.232993 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.234380 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.234545 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.234790 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.235357 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.236579 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.237061 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.242400 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/981f1fee-4d2a-4d80-bf38-80557b6c5033-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.245943 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.252780 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.261652 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8t7j\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-kube-api-access-h8t7j\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.262517 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/981f1fee-4d2a-4d80-bf38-80557b6c5033-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.287772 4979 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvmv4"] Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.310264 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.317924 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.319770 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: W0130 22:00:03.323165 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16f23a16_7799_4e68_a4f9_0a392a20d0ee.slice/crio-90afa68a341d945cd89f0268b29de137866688adbd59ae5cf4c97137825f4118 WatchSource:0}: Error finding container 90afa68a341d945cd89f0268b29de137866688adbd59ae5cf4c97137825f4118: Status 404 returned error can't find the container with id 90afa68a341d945cd89f0268b29de137866688adbd59ae5cf4c97137825f4118 Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.323441 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.323655 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.323860 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.324113 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.324353 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pvjzf" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.325537 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.327155 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.365815 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.439366 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn" event={"ID":"32529c55-774e-471a-8d6e-9ff5ba02c047","Type":"ContainerStarted","Data":"5fdca6155ec6e9cd50c6b13a2673eed1789482db7ff5c92cf6aabdd6bdf2e4cc"} Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.442792 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" event={"ID":"16f23a16-7799-4e68-a4f9-0a392a20d0ee","Type":"ContainerStarted","Data":"90afa68a341d945cd89f0268b29de137866688adbd59ae5cf4c97137825f4118"} Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443476 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443537 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443567 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443611 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443666 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443700 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e28a1e34-b97c-4090-adf8-fa3e2b766365-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443751 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443777 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e28a1e34-b97c-4090-adf8-fa3e2b766365-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443842 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443889 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7qvl\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-kube-api-access-n7qvl\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " 
pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.443947 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546303 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546402 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546430 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e28a1e34-b97c-4090-adf8-fa3e2b766365-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546448 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546477 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e28a1e34-b97c-4090-adf8-fa3e2b766365-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546511 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546571 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7qvl\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-kube-api-access-n7qvl\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546598 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546618 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.546668 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.547649 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.547924 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.548300 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.570374 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.573780 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.574446 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.579885 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.579937 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.582260 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7qvl\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-kube-api-access-n7qvl\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.583015 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e28a1e34-b97c-4090-adf8-fa3e2b766365-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.585255 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e28a1e34-b97c-4090-adf8-fa3e2b766365-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.597398 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.604352 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.665776 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.813633 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.953778 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cfffa-828e-4f0e-9903-4c1580e20c67-config-volume\") pod \"365cfffa-828e-4f0e-9903-4c1580e20c67\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.953857 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cfffa-828e-4f0e-9903-4c1580e20c67-secret-volume\") pod \"365cfffa-828e-4f0e-9903-4c1580e20c67\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.954061 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qql6c\" (UniqueName: \"kubernetes.io/projected/365cfffa-828e-4f0e-9903-4c1580e20c67-kube-api-access-qql6c\") pod \"365cfffa-828e-4f0e-9903-4c1580e20c67\" (UID: \"365cfffa-828e-4f0e-9903-4c1580e20c67\") " Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.955041 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/365cfffa-828e-4f0e-9903-4c1580e20c67-config-volume" (OuterVolumeSpecName: "config-volume") pod "365cfffa-828e-4f0e-9903-4c1580e20c67" (UID: "365cfffa-828e-4f0e-9903-4c1580e20c67"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.958502 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365cfffa-828e-4f0e-9903-4c1580e20c67-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "365cfffa-828e-4f0e-9903-4c1580e20c67" (UID: "365cfffa-828e-4f0e-9903-4c1580e20c67"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4979]: I0130 22:00:03.963103 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365cfffa-828e-4f0e-9903-4c1580e20c67-kube-api-access-qql6c" (OuterVolumeSpecName: "kube-api-access-qql6c") pod "365cfffa-828e-4f0e-9903-4c1580e20c67" (UID: "365cfffa-828e-4f0e-9903-4c1580e20c67"). InnerVolumeSpecName "kube-api-access-qql6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.058121 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cfffa-828e-4f0e-9903-4c1580e20c67-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.058628 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cfffa-828e-4f0e-9903-4c1580e20c67-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.058639 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qql6c\" (UniqueName: \"kubernetes.io/projected/365cfffa-828e-4f0e-9903-4c1580e20c67-kube-api-access-qql6c\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.134107 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:00:04 crc kubenswrapper[4979]: W0130 22:00:04.150786 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode28a1e34_b97c_4090_adf8_fa3e2b766365.slice/crio-07a49cceb74489142f70c5e54b77a1260f27b6febbad8e29043ec778ce1e05b1 WatchSource:0}: Error finding container 07a49cceb74489142f70c5e54b77a1260f27b6febbad8e29043ec778ce1e05b1: Status 404 returned error can't find the container with id 07a49cceb74489142f70c5e54b77a1260f27b6febbad8e29043ec778ce1e05b1 Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.242081 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:00:04 crc kubenswrapper[4979]: W0130 22:00:04.251291 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod981f1fee_4d2a_4d80_bf38_80557b6c5033.slice/crio-b0ebf6137f8f3321300579002f1760a8ba9a97e5b03ab3c25ec19ac9cb4798ff WatchSource:0}: Error finding container b0ebf6137f8f3321300579002f1760a8ba9a97e5b03ab3c25ec19ac9cb4798ff: Status 404 returned error can't find the container with id b0ebf6137f8f3321300579002f1760a8ba9a97e5b03ab3c25ec19ac9cb4798ff Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.457527 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"981f1fee-4d2a-4d80-bf38-80557b6c5033","Type":"ContainerStarted","Data":"b0ebf6137f8f3321300579002f1760a8ba9a97e5b03ab3c25ec19ac9cb4798ff"} Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.460120 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.460092 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4" event={"ID":"365cfffa-828e-4f0e-9903-4c1580e20c67","Type":"ContainerDied","Data":"4a93ad8a04f1005faa8a0dc60664c1d9ce2d911143afa4c738b119ceb5660e48"} Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.460408 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a93ad8a04f1005faa8a0dc60664c1d9ce2d911143afa4c738b119ceb5660e48" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.462550 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e28a1e34-b97c-4090-adf8-fa3e2b766365","Type":"ContainerStarted","Data":"07a49cceb74489142f70c5e54b77a1260f27b6febbad8e29043ec778ce1e05b1"} Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.577659 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 22:00:04 crc kubenswrapper[4979]: E0130 22:00:04.578164 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365cfffa-828e-4f0e-9903-4c1580e20c67" containerName="collect-profiles" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.578181 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="365cfffa-828e-4f0e-9903-4c1580e20c67" containerName="collect-profiles" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.578351 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="365cfffa-828e-4f0e-9903-4c1580e20c67" containerName="collect-profiles" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.581607 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.586448 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4n4fr" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.586561 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.586723 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.586861 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.590979 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.595004 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666119 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666221 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666258 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-kolla-config\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666310 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666329 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-default\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666353 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666377 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.666397 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqqfg\" (UniqueName: \"kubernetes.io/projected/6795c6d5-6bb8-432f-b7ca-f29f33298093-kube-api-access-rqqfg\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774026 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774156 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqqfg\" (UniqueName: \"kubernetes.io/projected/6795c6d5-6bb8-432f-b7ca-f29f33298093-kube-api-access-rqqfg\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774388 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774547 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774661 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-kolla-config\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774717 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774761 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-default\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.774806 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.777507 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.777807 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.779568 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.781815 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-kolla-config\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.781952 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-default\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.798862 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.802833 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.812179 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqqfg\" (UniqueName: \"kubernetes.io/projected/6795c6d5-6bb8-432f-b7ca-f29f33298093-kube-api-access-rqqfg\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.882209 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " pod="openstack/openstack-galera-0" Jan 30 22:00:04 crc kubenswrapper[4979]: I0130 22:00:04.910817 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 22:00:05 crc kubenswrapper[4979]: I0130 22:00:05.283926 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 22:00:05 crc kubenswrapper[4979]: I0130 22:00:05.477476 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6795c6d5-6bb8-432f-b7ca-f29f33298093","Type":"ContainerStarted","Data":"78ea57414491f2323050c139427e26db676dbcbe77ee157ba12f1a06c2d26416"} Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.029482 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.032395 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.037386 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.037401 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.037554 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.039181 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wj9ck" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.063555 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100143 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100226 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100605 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100657 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100779 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100826 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6wgl\" (UniqueName: \"kubernetes.io/projected/51b68702-8d5d-43f3-b4e7-936ceb5de933-kube-api-access-l6wgl\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.100861 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.101055 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204296 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204363 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204392 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204511 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204533 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204572 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6wgl\" (UniqueName: \"kubernetes.io/projected/51b68702-8d5d-43f3-b4e7-936ceb5de933-kube-api-access-l6wgl\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.204679 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.205419 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.206184 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.206721 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.206713 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.207459 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.218666 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.230315 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.233333 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6wgl\" (UniqueName: \"kubernetes.io/projected/51b68702-8d5d-43f3-b4e7-936ceb5de933-kube-api-access-l6wgl\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.247146 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.371847 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.372112 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.373202 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.377274 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6xhn8" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.378139 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.390657 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.425133 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.519067 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mk5r\" (UniqueName: \"kubernetes.io/projected/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kube-api-access-4mk5r\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.519155 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.519222 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kolla-config\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.519244 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc 
kubenswrapper[4979]: I0130 22:00:06.519278 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-config-data\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.621574 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mk5r\" (UniqueName: \"kubernetes.io/projected/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kube-api-access-4mk5r\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.621659 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.621716 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kolla-config\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.621740 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.621772 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-config-data\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.622887 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-config-data\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.624053 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kolla-config\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.660962 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.661796 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc 
kubenswrapper[4979]: I0130 22:00:06.673893 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mk5r\" (UniqueName: \"kubernetes.io/projected/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kube-api-access-4mk5r\") pod \"memcached-0\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " pod="openstack/memcached-0" Jan 30 22:00:06 crc kubenswrapper[4979]: I0130 22:00:06.716542 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 22:00:07 crc kubenswrapper[4979]: I0130 22:00:07.084584 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.330302 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.337476 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.348105 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.387513 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-6b9fg" Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.506438 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntbh6\" (UniqueName: \"kubernetes.io/projected/802f295d-d208-4750-ab9b-c3886cb30091-kube-api-access-ntbh6\") pod \"kube-state-metrics-0\" (UID: \"802f295d-d208-4750-ab9b-c3886cb30091\") " pod="openstack/kube-state-metrics-0" Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.609014 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntbh6\" (UniqueName: \"kubernetes.io/projected/802f295d-d208-4750-ab9b-c3886cb30091-kube-api-access-ntbh6\") pod \"kube-state-metrics-0\" (UID: \"802f295d-d208-4750-ab9b-c3886cb30091\") " pod="openstack/kube-state-metrics-0" Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.644355 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntbh6\" (UniqueName: \"kubernetes.io/projected/802f295d-d208-4750-ab9b-c3886cb30091-kube-api-access-ntbh6\") pod \"kube-state-metrics-0\" (UID: \"802f295d-d208-4750-ab9b-c3886cb30091\") " pod="openstack/kube-state-metrics-0" Jan 30 22:00:08 crc kubenswrapper[4979]: I0130 22:00:08.714107 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.339006 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kxk8g"] Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.340407 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxk8g"
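
kube-state-metrics-0 is the simplest pod in this window: its only volume is kube-api-access-ntbh6, the projected volume that supplies a pod's API credentials. A projected volume merges several sources into one directory; for kube-api-access-* those are a bound service-account token, the cluster CA bundle, and the pod's namespace. The sketch below only mimics the resulting directory layout; writeKubeAPIAccess and its inputs are placeholders, not kubelet code.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeKubeAPIAccess sketches what a kube-api-access-* projected
// volume contains once SetUp succeeds: token, ca.crt and namespace
// files in one directory. All inputs here are placeholders.
func writeKubeAPIAccess(dir, token, caPEM, namespace string) error {
	files := map[string]string{
		"token":     token,     // bound service-account token
		"ca.crt":    caPEM,     // cluster CA bundle
		"namespace": namespace, // pod's namespace
	}
	for name, content := range files {
		if err := os.WriteFile(filepath.Join(dir, name), []byte(content), 0o640); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	dir, _ := os.MkdirTemp("", "kube-api-access-")
	if err := writeKubeAPIAccess(dir, "<token>", "<ca>", "openstack"); err != nil {
		fmt.Println("setup failed:", err)
		return
	}
	fmt.Println("projected volume written to", dir)
}
```
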
Need to start a new one" pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.347326 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.347407 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-rp4wv" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.347615 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.354175 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-tmjt2"] Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.356204 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.362066 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxk8g"] Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.375774 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tmjt2"] Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464167 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run-ovn\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464239 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464275 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-scripts\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464461 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgffv\" (UniqueName: \"kubernetes.io/projected/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-kube-api-access-mgffv\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464547 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-log-ovn\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464593 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkgwm\" (UniqueName: \"kubernetes.io/projected/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-kube-api-access-wkgwm\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " 
pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464651 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-combined-ca-bundle\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464688 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-run\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464709 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-lib\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464748 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-scripts\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464766 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-log\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464791 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-etc-ovs\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.464840 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-ovn-controller-tls-certs\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.566867 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run-ovn\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567270 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 
22:00:11.567376 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-scripts\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567488 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgffv\" (UniqueName: \"kubernetes.io/projected/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-kube-api-access-mgffv\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567592 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-log-ovn\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567672 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkgwm\" (UniqueName: \"kubernetes.io/projected/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-kube-api-access-wkgwm\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567769 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-combined-ca-bundle\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567869 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-run\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567971 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-lib\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567903 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-log-ovn\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568149 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-lib\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567978 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-run\") pod 
\"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567711 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run-ovn\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568109 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-scripts\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568277 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-log\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568306 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-etc-ovs\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.567816 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568400 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-ovn-controller-tls-certs\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568565 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-log\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.568640 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-etc-ovs\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.570113 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-scripts\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.571259 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-scripts\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.573351 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-ovn-controller-tls-certs\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.573580 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-combined-ca-bundle\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.588791 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgffv\" (UniqueName: \"kubernetes.io/projected/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-kube-api-access-mgffv\") pod \"ovn-controller-kxk8g\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.588976 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkgwm\" (UniqueName: \"kubernetes.io/projected/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-kube-api-access-wkgwm\") pod \"ovn-controller-ovs-tmjt2\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.676891 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxk8g" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.697402 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.984256 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.986799 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.990261 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.990558 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.991283 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-n9pfk" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.991304 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 22:00:11 crc kubenswrapper[4979]: I0130 22:00:11.991476 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.003288 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178193 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-config\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178279 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178335 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178387 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178443 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178473 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178506 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.178542 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7hdz\" (UniqueName: \"kubernetes.io/projected/e8a49e0c-0043-4326-b478-981d19e6480b-kube-api-access-r7hdz\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280215 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280318 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280349 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280387 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280424 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7hdz\" (UniqueName: \"kubernetes.io/projected/e8a49e0c-0043-4326-b478-981d19e6480b-kube-api-access-r7hdz\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280474 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-config\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280526 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.280576 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 
22:00:12.280858 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.281022 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.281846 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-config\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.282674 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.290496 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.290750 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.293225 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.305627 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7hdz\" (UniqueName: \"kubernetes.io/projected/e8a49e0c-0043-4326-b478-981d19e6480b-kube-api-access-r7hdz\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.307802 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 22:00:12.565089 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"51b68702-8d5d-43f3-b4e7-936ceb5de933","Type":"ContainerStarted","Data":"5c5282dd71d589822510ea8f2d38d385c993be6f5e42e4d1471904abd0c28e55"} Jan 30 22:00:12 crc kubenswrapper[4979]: I0130 
22:00:12.609160 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.453791 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.458283 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.461080 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.461255 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-w5vzq" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.461217 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.461686 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.463141 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.650212 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.650283 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.650323 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lsh6\" (UniqueName: \"kubernetes.io/projected/e16537b0-b66e-4bad-a481-9d2755cf6eb5-kube-api-access-5lsh6\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.650590 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.650689 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.650801 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-config\") pod \"ovsdbserver-sb-0\" 
(UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.651008 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.651196 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753456 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753514 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753598 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753628 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lsh6\" (UniqueName: \"kubernetes.io/projected/e16537b0-b66e-4bad-a481-9d2755cf6eb5-kube-api-access-5lsh6\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753678 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753698 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.753731 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.754174 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.754831 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.755083 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.755713 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.763439 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.768783 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.769428 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.772242 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lsh6\" (UniqueName: \"kubernetes.io/projected/e16537b0-b66e-4bad-a481-9d2755cf6eb5-kube-api-access-5lsh6\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.781048 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:15 crc kubenswrapper[4979]: I0130 22:00:15.832175 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 22:00:28 crc kubenswrapper[4979]: E0130 22:00:28.147394 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 30 22:00:28 crc kubenswrapper[4979]: E0130 22:00:28.149438 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqqfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(6795c6d5-6bb8-432f-b7ca-f29f33298093): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:00:28 crc kubenswrapper[4979]: E0130 22:00:28.150780 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" Jan 30 22:00:28 crc kubenswrapper[4979]: E0130 22:00:28.726711 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" Jan 30 22:00:29 crc kubenswrapper[4979]: E0130 22:00:29.489154 4979 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 22:00:29 crc kubenswrapper[4979]: E0130 22:00:29.489537 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7qvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(e28a1e34-b97c-4090-adf8-fa3e2b766365): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:00:29 crc kubenswrapper[4979]: E0130 22:00:29.490847 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" Jan 30 22:00:29 crc kubenswrapper[4979]: E0130 22:00:29.735084 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.284434 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.284671 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69j9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-2qnqn_openstack(32529c55-774e-471a-8d6e-9ff5ba02c047): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.286625 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn" podUID="32529c55-774e-471a-8d6e-9ff5ba02c047" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.392277 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.392925 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4s8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-27kgx_openstack(f7326bcf-bcff-43db-a33c-04f37fbd0ad6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.394272 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" podUID="f7326bcf-bcff-43db-a33c-04f37fbd0ad6" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.455680 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.455959 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 
5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nnpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-hgv8v_openstack(8de8c822-a2be-442f-af66-2e2b1991b947): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:00:30 crc kubenswrapper[4979]: E0130 22:00:30.457096 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" podUID="8de8c822-a2be-442f-af66-2e2b1991b947" Jan 30 22:00:30 crc kubenswrapper[4979]: I0130 22:00:30.742994 4979 generic.go:334] "Generic (PLEG): container finished" podID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerID="a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363" exitCode=0 Jan 30 22:00:30 crc kubenswrapper[4979]: I0130 22:00:30.743391 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" event={"ID":"16f23a16-7799-4e68-a4f9-0a392a20d0ee","Type":"ContainerDied","Data":"a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363"} Jan 30 22:00:30 crc kubenswrapper[4979]: I0130 22:00:30.750826 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"51b68702-8d5d-43f3-b4e7-936ceb5de933","Type":"ContainerStarted","Data":"92e73fbaf6be7974b5e70d2a4a6be5d1621679737d38de600bb587583fc30031"} Jan 30 22:00:30 crc kubenswrapper[4979]: I0130 22:00:30.916590 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:00:30 crc kubenswrapper[4979]: I0130 22:00:30.974325 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxk8g"] Jan 30 22:00:30 crc kubenswrapper[4979]: I0130 22:00:30.980339 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 22:00:31 crc 
kubenswrapper[4979]: W0130 22:00:31.097155 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0b30c9_4972_4476_90e8_eec8d5d44ce5.slice/crio-96db0ca5fc664494edd55a8a9e353913c559045aaf6936b24c262a6f00efc265 WatchSource:0}: Error finding container 96db0ca5fc664494edd55a8a9e353913c559045aaf6936b24c262a6f00efc265: Status 404 returned error can't find the container with id 96db0ca5fc664494edd55a8a9e353913c559045aaf6936b24c262a6f00efc265 Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.101697 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tmjt2"] Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.274284 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 22:00:31 crc kubenswrapper[4979]: E0130 22:00:31.374777 4979 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 30 22:00:31 crc kubenswrapper[4979]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/32529c55-774e-471a-8d6e-9ff5ba02c047/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 22:00:31 crc kubenswrapper[4979]: > podSandboxID="5fdca6155ec6e9cd50c6b13a2673eed1789482db7ff5c92cf6aabdd6bdf2e4cc" Jan 30 22:00:31 crc kubenswrapper[4979]: E0130 22:00:31.375026 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:00:31 crc kubenswrapper[4979]: init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69j9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-2qnqn_openstack(32529c55-774e-471a-8d6e-9ff5ba02c047): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/32529c55-774e-471a-8d6e-9ff5ba02c047/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 22:00:31 crc kubenswrapper[4979]: > logger="UnhandledError" Jan 30 22:00:31 crc kubenswrapper[4979]: E0130 22:00:31.376814 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/32529c55-774e-471a-8d6e-9ff5ba02c047/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn" podUID="32529c55-774e-471a-8d6e-9ff5ba02c047" Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.392146 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.761064 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8a49e0c-0043-4326-b478-981d19e6480b","Type":"ContainerStarted","Data":"1ba7eb4e73d21b76aae2c54799684c5d1a7e13a849894846bd2ade424074662c"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.762707 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e16537b0-b66e-4bad-a481-9d2755cf6eb5","Type":"ContainerStarted","Data":"6de0f04b65ae33fad502fd47c75940202442c98e117caa698fb7adad6b0870b8"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.764374 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" 
event={"ID":"8de8c822-a2be-442f-af66-2e2b1991b947","Type":"ContainerDied","Data":"e7c271f9b80119af98bbde738800e5055c5495ef3ba3e1c2862a48b9000060eb"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.764406 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7c271f9b80119af98bbde738800e5055c5495ef3ba3e1c2862a48b9000060eb" Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.766766 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d","Type":"ContainerStarted","Data":"bef9626e17c775699e3abae85cd19e88917b71194c8acdd56a70c42320faed2f"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.767811 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" event={"ID":"f7326bcf-bcff-43db-a33c-04f37fbd0ad6","Type":"ContainerDied","Data":"1b162861f81aba1e9041dcdb0992a3be89fec862f4c632e520adbd2c86190d6f"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.767837 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b162861f81aba1e9041dcdb0992a3be89fec862f4c632e520adbd2c86190d6f" Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.769065 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"802f295d-d208-4750-ab9b-c3886cb30091","Type":"ContainerStarted","Data":"073da3757392885be51de106d5a842ae9944cc19e4dc0f6b4686c2786716c716"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.771073 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerStarted","Data":"af076ee56d5886e64a296e55b03b5bb0ded8de489a95899c61270dac099f1dfe"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.772650 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g" event={"ID":"5e0b30c9-4972-4476-90e8-eec8d5d44ce5","Type":"ContainerStarted","Data":"96db0ca5fc664494edd55a8a9e353913c559045aaf6936b24c262a6f00efc265"} Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.934367 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 22:00:31 crc kubenswrapper[4979]: I0130 22:00:31.945839 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.024304 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-config\") pod \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.024532 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config\") pod \"8de8c822-a2be-442f-af66-2e2b1991b947\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.024606 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-dns-svc\") pod \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.024716 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nnpz\" (UniqueName: \"kubernetes.io/projected/8de8c822-a2be-442f-af66-2e2b1991b947-kube-api-access-8nnpz\") pod \"8de8c822-a2be-442f-af66-2e2b1991b947\" (UID: \"8de8c822-a2be-442f-af66-2e2b1991b947\") " Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.025375 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7326bcf-bcff-43db-a33c-04f37fbd0ad6" (UID: "f7326bcf-bcff-43db-a33c-04f37fbd0ad6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.025438 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config" (OuterVolumeSpecName: "config") pod "8de8c822-a2be-442f-af66-2e2b1991b947" (UID: "8de8c822-a2be-442f-af66-2e2b1991b947"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.025863 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-config" (OuterVolumeSpecName: "config") pod "f7326bcf-bcff-43db-a33c-04f37fbd0ad6" (UID: "f7326bcf-bcff-43db-a33c-04f37fbd0ad6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.032567 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4s8n\" (UniqueName: \"kubernetes.io/projected/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-kube-api-access-m4s8n\") pod \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\" (UID: \"f7326bcf-bcff-43db-a33c-04f37fbd0ad6\") " Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.033689 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de8c822-a2be-442f-af66-2e2b1991b947-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.033706 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.033716 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.033963 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-kube-api-access-m4s8n" (OuterVolumeSpecName: "kube-api-access-m4s8n") pod "f7326bcf-bcff-43db-a33c-04f37fbd0ad6" (UID: "f7326bcf-bcff-43db-a33c-04f37fbd0ad6"). InnerVolumeSpecName "kube-api-access-m4s8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.040338 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de8c822-a2be-442f-af66-2e2b1991b947-kube-api-access-8nnpz" (OuterVolumeSpecName: "kube-api-access-8nnpz") pod "8de8c822-a2be-442f-af66-2e2b1991b947" (UID: "8de8c822-a2be-442f-af66-2e2b1991b947"). InnerVolumeSpecName "kube-api-access-8nnpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.136676 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nnpz\" (UniqueName: \"kubernetes.io/projected/8de8c822-a2be-442f-af66-2e2b1991b947-kube-api-access-8nnpz\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.136719 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4s8n\" (UniqueName: \"kubernetes.io/projected/f7326bcf-bcff-43db-a33c-04f37fbd0ad6-kube-api-access-m4s8n\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.781605 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"981f1fee-4d2a-4d80-bf38-80557b6c5033","Type":"ContainerStarted","Data":"936faae891dc0d6463f534c26667ac6f817885146529e96b4394369309b4bf52"} Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.784878 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" event={"ID":"16f23a16-7799-4e68-a4f9-0a392a20d0ee","Type":"ContainerStarted","Data":"33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503"} Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.784933 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-27kgx" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.784966 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.784909 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hgv8v" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.848303 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" podStartSLOduration=3.770798627 podStartE2EDuration="30.848282199s" podCreationTimestamp="2026-01-30 22:00:02 +0000 UTC" firstStartedPulling="2026-01-30 22:00:03.32752626 +0000 UTC m=+1199.288773293" lastFinishedPulling="2026-01-30 22:00:30.405009822 +0000 UTC m=+1226.366256865" observedRunningTime="2026-01-30 22:00:32.842788071 +0000 UTC m=+1228.804035104" watchObservedRunningTime="2026-01-30 22:00:32.848282199 +0000 UTC m=+1228.809529232" Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.885245 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hgv8v"] Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.888160 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hgv8v"] Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.923249 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-27kgx"] Jan 30 22:00:32 crc kubenswrapper[4979]: I0130 22:00:32.929066 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-27kgx"] Jan 30 22:00:33 crc kubenswrapper[4979]: I0130 22:00:33.080187 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de8c822-a2be-442f-af66-2e2b1991b947" path="/var/lib/kubelet/pods/8de8c822-a2be-442f-af66-2e2b1991b947/volumes" Jan 30 22:00:33 crc kubenswrapper[4979]: I0130 22:00:33.080682 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7326bcf-bcff-43db-a33c-04f37fbd0ad6" path="/var/lib/kubelet/pods/f7326bcf-bcff-43db-a33c-04f37fbd0ad6/volumes" Jan 30 22:00:36 crc kubenswrapper[4979]: I0130 22:00:36.839619 4979 generic.go:334] "Generic (PLEG): container finished" podID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerID="92e73fbaf6be7974b5e70d2a4a6be5d1621679737d38de600bb587583fc30031" exitCode=0 Jan 30 22:00:36 crc kubenswrapper[4979]: I0130 22:00:36.839727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"51b68702-8d5d-43f3-b4e7-936ceb5de933","Type":"ContainerDied","Data":"92e73fbaf6be7974b5e70d2a4a6be5d1621679737d38de600bb587583fc30031"} Jan 30 22:00:37 crc kubenswrapper[4979]: I0130 22:00:37.566187 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:00:37 crc kubenswrapper[4979]: I0130 22:00:37.621205 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-2qnqn"] Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.081429 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.243586 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-config\") pod \"32529c55-774e-471a-8d6e-9ff5ba02c047\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") "
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.243678 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-dns-svc\") pod \"32529c55-774e-471a-8d6e-9ff5ba02c047\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") "
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.243725 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69j9n\" (UniqueName: \"kubernetes.io/projected/32529c55-774e-471a-8d6e-9ff5ba02c047-kube-api-access-69j9n\") pod \"32529c55-774e-471a-8d6e-9ff5ba02c047\" (UID: \"32529c55-774e-471a-8d6e-9ff5ba02c047\") "
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.255506 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32529c55-774e-471a-8d6e-9ff5ba02c047-kube-api-access-69j9n" (OuterVolumeSpecName: "kube-api-access-69j9n") pod "32529c55-774e-471a-8d6e-9ff5ba02c047" (UID: "32529c55-774e-471a-8d6e-9ff5ba02c047"). InnerVolumeSpecName "kube-api-access-69j9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.276355 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32529c55-774e-471a-8d6e-9ff5ba02c047" (UID: "32529c55-774e-471a-8d6e-9ff5ba02c047"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.277529 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-config" (OuterVolumeSpecName: "config") pod "32529c55-774e-471a-8d6e-9ff5ba02c047" (UID: "32529c55-774e-471a-8d6e-9ff5ba02c047"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.345642 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.345683 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69j9n\" (UniqueName: \"kubernetes.io/projected/32529c55-774e-471a-8d6e-9ff5ba02c047-kube-api-access-69j9n\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.345694 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32529c55-774e-471a-8d6e-9ff5ba02c047-config\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.865780 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e16537b0-b66e-4bad-a481-9d2755cf6eb5","Type":"ContainerStarted","Data":"364e682e6c255c1ae57ab43188da7c33d808a98976158abfaa1e6b315ea3de7e"}
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.868615 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn" event={"ID":"32529c55-774e-471a-8d6e-9ff5ba02c047","Type":"ContainerDied","Data":"5fdca6155ec6e9cd50c6b13a2673eed1789482db7ff5c92cf6aabdd6bdf2e4cc"}
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.868903 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-2qnqn"
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.928358 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-2qnqn"]
Jan 30 22:00:38 crc kubenswrapper[4979]: I0130 22:00:38.938791 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-2qnqn"]
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.080402 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32529c55-774e-471a-8d6e-9ff5ba02c047" path="/var/lib/kubelet/pods/32529c55-774e-471a-8d6e-9ff5ba02c047/volumes"
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.882522 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"51b68702-8d5d-43f3-b4e7-936ceb5de933","Type":"ContainerStarted","Data":"b8dd50aa90c7ce48431a68126a4e4bcee3261b44260cf48698bd70f7bf026dc4"}
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.886644 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d","Type":"ContainerStarted","Data":"11167d299d7103f588d853413dc7b7095145b87d82239c5f576cb6d82dbfce8a"}
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.886754 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.888588 4979 generic.go:334] "Generic (PLEG): container finished" podID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerID="515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd" exitCode=0
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.888653 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerDied","Data":"515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd"}
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.891815 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"802f295d-d208-4750-ab9b-c3886cb30091","Type":"ContainerStarted","Data":"20c28cbb64eeb54902f8d83f5e5ce1cb0b5f0534acb2d87e4d7c5f48e86998df"}
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.892212 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.894780 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g" event={"ID":"5e0b30c9-4972-4476-90e8-eec8d5d44ce5","Type":"ContainerStarted","Data":"2f99585a0b5724b1ae341c2bb5598dd9878e0e62705a08aa07e6569ea6c20dc9"}
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.894911 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-kxk8g"
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.899634 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8a49e0c-0043-4326-b478-981d19e6480b","Type":"ContainerStarted","Data":"e0b4d6ab18b18def097e57b8f8ea312d94d6ebc53da831f12d75273becc95e4d"}
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.909320 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=17.719469816 podStartE2EDuration="35.909301869s" podCreationTimestamp="2026-01-30 22:00:04 +0000 UTC" firstStartedPulling="2026-01-30 22:00:12.213556345 +0000 UTC m=+1208.174803388" lastFinishedPulling="2026-01-30 22:00:30.403388408 +0000 UTC m=+1226.364635441" observedRunningTime="2026-01-30 22:00:39.904617242 +0000 UTC m=+1235.865864275" watchObservedRunningTime="2026-01-30 22:00:39.909301869 +0000 UTC m=+1235.870548902"
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.927575 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=23.709089034 podStartE2EDuration="31.92754754s" podCreationTimestamp="2026-01-30 22:00:08 +0000 UTC" firstStartedPulling="2026-01-30 22:00:30.903642756 +0000 UTC m=+1226.864889789" lastFinishedPulling="2026-01-30 22:00:39.122101262 +0000 UTC m=+1235.083348295" observedRunningTime="2026-01-30 22:00:39.922242417 +0000 UTC m=+1235.883489450" watchObservedRunningTime="2026-01-30 22:00:39.92754754 +0000 UTC m=+1235.888794573"
Jan 30 22:00:39 crc kubenswrapper[4979]: I0130 22:00:39.970276 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=27.661802394 podStartE2EDuration="33.970250608s" podCreationTimestamp="2026-01-30 22:00:06 +0000 UTC" firstStartedPulling="2026-01-30 22:00:31.072944431 +0000 UTC m=+1227.034191464" lastFinishedPulling="2026-01-30 22:00:37.381392645 +0000 UTC m=+1233.342639678" observedRunningTime="2026-01-30 22:00:39.965720606 +0000 UTC m=+1235.926967639" watchObservedRunningTime="2026-01-30 22:00:39.970250608 +0000 UTC m=+1235.931497641"
Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.911495 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8a49e0c-0043-4326-b478-981d19e6480b","Type":"ContainerStarted","Data":"9e984fe191fbb0e089fea2d7c4a853d2ee59f390e44ae404701bd08fbd0e1844"}
Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.913953 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e16537b0-b66e-4bad-a481-9d2755cf6eb5","Type":"ContainerStarted","Data":"ff005d24d962eb84bd10a56b66ec88ce9be0ba0641162443a679b4594c534402"}
Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.916715 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerStarted","Data":"ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb"}
Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.916749 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerStarted","Data":"2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70"}
Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.917129 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tmjt2"
Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.932385 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.209070092 podStartE2EDuration="30.932353669s" podCreationTimestamp="2026-01-30 22:00:10 +0000 UTC" firstStartedPulling="2026-01-30 22:00:31.47293228 +0000 UTC m=+1227.434179313" lastFinishedPulling="2026-01-30 22:00:40.196215857 +0000 UTC m=+1236.157462890" observedRunningTime="2026-01-30 22:00:40.92791188 +0000 UTC m=+1236.889158913" watchObservedRunningTime="2026-01-30 22:00:40.932353669 +0000 UTC m=+1236.893600702"
Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.934785 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kxk8g" podStartSLOduration=23.143787828 podStartE2EDuration="29.934768785s" podCreationTimestamp="2026-01-30 22:00:11 +0000 UTC" firstStartedPulling="2026-01-30 22:00:31.103378398 +0000 UTC m=+1227.064625421" lastFinishedPulling="2026-01-30 22:00:37.894359345 +0000 UTC m=+1233.855606378" observedRunningTime="2026-01-30 22:00:39.989842915 +0000 UTC m=+1235.951089978" watchObservedRunningTime="2026-01-30 22:00:40.934768785 +0000 UTC m=+1236.896015818"
Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.955622 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=18.245349099 podStartE2EDuration="26.955592645s" podCreationTimestamp="2026-01-30 22:00:14 +0000 UTC" firstStartedPulling="2026-01-30 22:00:31.482194369 +0000 UTC m=+1227.443441422" lastFinishedPulling="2026-01-30 22:00:40.192437935 +0000 UTC m=+1236.153684968" observedRunningTime="2026-01-30 22:00:40.95242342 +0000 UTC m=+1236.913670463" watchObservedRunningTime="2026-01-30 22:00:40.955592645 +0000 UTC m=+1236.916839708"
Jan 30 22:00:40 crc kubenswrapper[4979]: I0130 22:00:40.981411 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-tmjt2" podStartSLOduration=23.974858476 podStartE2EDuration="29.981381928s" podCreationTimestamp="2026-01-30 22:00:11 +0000 UTC" firstStartedPulling="2026-01-30 22:00:31.374829571 +0000 UTC m=+1227.336076604" lastFinishedPulling="2026-01-30 22:00:37.381353023 +0000 UTC m=+1233.342600056" observedRunningTime="2026-01-30 22:00:40.981251065 +0000 UTC m=+1236.942498118" watchObservedRunningTime="2026-01-30 22:00:40.981381928 +0000 UTC m=+1236.942628951"
Jan 30 22:00:41 crc kubenswrapper[4979]: I0130 22:00:41.698402 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tmjt2"
Jan 30 22:00:42 crc kubenswrapper[4979]: I0130 22:00:42.610509 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 30 22:00:42 crc kubenswrapper[4979]: I0130 22:00:42.612559 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 30 22:00:42 crc kubenswrapper[4979]: I0130 22:00:42.653391 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 30 22:00:42 crc kubenswrapper[4979]: I0130 22:00:42.832685 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:42 crc kubenswrapper[4979]: I0130 22:00:42.882523 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:42 crc kubenswrapper[4979]: E0130 22:00:42.886518 4979 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.143:39584->38.102.83.143:38353: write tcp 38.102.83.143:39584->38.102.83.143:38353: write: broken pipe
Jan 30 22:00:42 crc kubenswrapper[4979]: I0130 22:00:42.932919 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:43 crc kubenswrapper[4979]: I0130 22:00:43.945106 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6795c6d5-6bb8-432f-b7ca-f29f33298093","Type":"ContainerStarted","Data":"c95e9571ab3d28e43a0c69cdf9503d7a855b5db4e2dc8986089e4c89a9a844d2"}
Jan 30 22:00:43 crc kubenswrapper[4979]: I0130 22:00:43.998046 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.255709 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f55rb"]
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.264320 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.268486 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.291578 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f55rb"]
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.321195 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-lz8zj"]
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.322284 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.329427 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.346516 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lz8zj"]
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371066 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-combined-ca-bundle\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371135 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-config\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371160 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsbsj\" (UniqueName: \"kubernetes.io/projected/817d8847-f022-4837-834f-a0e4b124f7ea-kube-api-access-bsbsj\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371192 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnm5g\" (UniqueName: \"kubernetes.io/projected/dbccd103-4e22-4fd6-a5ad-fc996b992328-kube-api-access-tnm5g\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371214 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovn-rundir\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371264 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817d8847-f022-4837-834f-a0e4b124f7ea-config\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371296 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371312 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovs-rundir\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371337 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.371371 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.473544 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475215 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-combined-ca-bundle\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475265 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-config\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475297 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsbsj\" (UniqueName: \"kubernetes.io/projected/817d8847-f022-4837-834f-a0e4b124f7ea-kube-api-access-bsbsj\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475329 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnm5g\" (UniqueName: \"kubernetes.io/projected/dbccd103-4e22-4fd6-a5ad-fc996b992328-kube-api-access-tnm5g\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475355 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovn-rundir\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475436 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817d8847-f022-4837-834f-a0e4b124f7ea-config\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475487 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475511 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovs-rundir\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.475544 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.477160 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.478794 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-config\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.479869 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.480277 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovs-rundir\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.480333 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovn-rundir\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.480446 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817d8847-f022-4837-834f-a0e4b124f7ea-config\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.482146 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.483321 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-combined-ca-bundle\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.503997 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f55rb"]
Jan 30 22:00:44 crc kubenswrapper[4979]: E0130 22:00:44.505204 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-tnm5g], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7f896c8c65-f55rb" podUID="dbccd103-4e22-4fd6-a5ad-fc996b992328"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.510929 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsbsj\" (UniqueName: \"kubernetes.io/projected/817d8847-f022-4837-834f-a0e4b124f7ea-kube-api-access-bsbsj\") pod \"ovn-controller-metrics-lz8zj\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.511596 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnm5g\" (UniqueName: \"kubernetes.io/projected/dbccd103-4e22-4fd6-a5ad-fc996b992328-kube-api-access-tnm5g\") pod \"dnsmasq-dns-7f896c8c65-f55rb\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") " pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.527007 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jjmrl"]
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.528542 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.533125 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.545201 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jjmrl"]
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.578756 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-config\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.578843 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/34b4df8c-21a1-4acb-b209-643ded266729-kube-api-access-jcvtc\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.578874 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.578987 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.579347 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.652930 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lz8zj"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.680799 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/34b4df8c-21a1-4acb-b209-643ded266729-kube-api-access-jcvtc\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.680856 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.680881 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.680927 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.680993 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-config\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.681782 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.682102 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.682115 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.683226 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-config\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.766487 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/34b4df8c-21a1-4acb-b209-643ded266729-kube-api-access-jcvtc\") pod \"dnsmasq-dns-86db49b7ff-jjmrl\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") " pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.891825 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.953881 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:44 crc kubenswrapper[4979]: I0130 22:00:44.969149 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.086742 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-config\") pod \"dbccd103-4e22-4fd6-a5ad-fc996b992328\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") "
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.087028 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnm5g\" (UniqueName: \"kubernetes.io/projected/dbccd103-4e22-4fd6-a5ad-fc996b992328-kube-api-access-tnm5g\") pod \"dbccd103-4e22-4fd6-a5ad-fc996b992328\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") "
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.087115 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-ovsdbserver-sb\") pod \"dbccd103-4e22-4fd6-a5ad-fc996b992328\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") "
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.087196 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-dns-svc\") pod \"dbccd103-4e22-4fd6-a5ad-fc996b992328\" (UID: \"dbccd103-4e22-4fd6-a5ad-fc996b992328\") "
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.087482 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-config" (OuterVolumeSpecName: "config") pod "dbccd103-4e22-4fd6-a5ad-fc996b992328" (UID: "dbccd103-4e22-4fd6-a5ad-fc996b992328"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.087827 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dbccd103-4e22-4fd6-a5ad-fc996b992328" (UID: "dbccd103-4e22-4fd6-a5ad-fc996b992328"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.087854 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dbccd103-4e22-4fd6-a5ad-fc996b992328" (UID: "dbccd103-4e22-4fd6-a5ad-fc996b992328"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.088065 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.088092 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.097175 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbccd103-4e22-4fd6-a5ad-fc996b992328-kube-api-access-tnm5g" (OuterVolumeSpecName: "kube-api-access-tnm5g") pod "dbccd103-4e22-4fd6-a5ad-fc996b992328" (UID: "dbccd103-4e22-4fd6-a5ad-fc996b992328"). InnerVolumeSpecName "kube-api-access-tnm5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.190533 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbccd103-4e22-4fd6-a5ad-fc996b992328-config\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.190574 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnm5g\" (UniqueName: \"kubernetes.io/projected/dbccd103-4e22-4fd6-a5ad-fc996b992328-kube-api-access-tnm5g\") on node \"crc\" DevicePath \"\""
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.242098 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lz8zj"]
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.567305 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jjmrl"]
Jan 30 22:00:45 crc kubenswrapper[4979]: W0130 22:00:45.576403 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b4df8c_21a1_4acb_b209_643ded266729.slice/crio-4b609c2b6d08a9d1b5ea4a1a01b4dc95a89b4a0ad9e2c562b11f37f1ad0a8fff WatchSource:0}: Error finding container 4b609c2b6d08a9d1b5ea4a1a01b4dc95a89b4a0ad9e2c562b11f37f1ad0a8fff: Status 404 returned error can't find the container with id 4b609c2b6d08a9d1b5ea4a1a01b4dc95a89b4a0ad9e2c562b11f37f1ad0a8fff
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.972337 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" event={"ID":"34b4df8c-21a1-4acb-b209-643ded266729","Type":"ContainerStarted","Data":"4b609c2b6d08a9d1b5ea4a1a01b4dc95a89b4a0ad9e2c562b11f37f1ad0a8fff"}
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.974372 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-f55rb"
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.977411 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lz8zj" event={"ID":"817d8847-f022-4837-834f-a0e4b124f7ea","Type":"ContainerStarted","Data":"bb8bcac19d63070cb472f5498c791e719cc957cf60e16d8441a9b6a9f88dbeff"}
Jan 30 22:00:45 crc kubenswrapper[4979]: I0130 22:00:45.977500 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lz8zj" event={"ID":"817d8847-f022-4837-834f-a0e4b124f7ea","Type":"ContainerStarted","Data":"4155908da65ed980762b6600d6cd531e31e34d1e8a5cf0688a19ba647961bebc"}
Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.003320 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-lz8zj" podStartSLOduration=2.003300784 podStartE2EDuration="2.003300784s" podCreationTimestamp="2026-01-30 22:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:00:45.9975533 +0000 UTC m=+1241.958800333" watchObservedRunningTime="2026-01-30 22:00:46.003300784 +0000 UTC m=+1241.964547807"
Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.055887 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f55rb"]
Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.061674 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f55rb"]
Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.372983 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.375853 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.506769 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.718213 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.986613 4979 generic.go:334] "Generic (PLEG): container finished" podID="34b4df8c-21a1-4acb-b209-643ded266729" containerID="38f94d44f88fb380f22ffff6e87982c6b5afa5689e2945b28203090cec0d6de2" exitCode=0
Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.986729 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" event={"ID":"34b4df8c-21a1-4acb-b209-643ded266729","Type":"ContainerDied","Data":"38f94d44f88fb380f22ffff6e87982c6b5afa5689e2945b28203090cec0d6de2"}
Jan 30 22:00:46 crc kubenswrapper[4979]: I0130 22:00:46.989528 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e28a1e34-b97c-4090-adf8-fa3e2b766365","Type":"ContainerStarted","Data":"d23312f80a962608adf95395e957ee6134bf402e8fc2a1db6e478f01ef1ed902"}
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.086016 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbccd103-4e22-4fd6-a5ad-fc996b992328" path="/var/lib/kubelet/pods/dbccd103-4e22-4fd6-a5ad-fc996b992328/volumes"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.108521 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.658156 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.872006 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.875053 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.879580 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.879626 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.879881 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.879968 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-xt5jr"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.915924 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955254 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955311 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955342 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955376 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955514 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955566 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dn9c\" (UniqueName: \"kubernetes.io/projected/e7cc7cf6-3592-4e25-9578-27ae56d6909b-kube-api-access-5dn9c\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:47 crc kubenswrapper[4979]: I0130 22:00:47.955816 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.008914 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" event={"ID":"34b4df8c-21a1-4acb-b209-643ded266729","Type":"ContainerStarted","Data":"77fc722e9bec3fafe53167de52ee1127b71951d1b0c68ed26631637e0cb42c5e"}
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.009730 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.031704 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" podStartSLOduration=4.031686681 podStartE2EDuration="4.031686681s" podCreationTimestamp="2026-01-30 22:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:00:48.029714087 +0000 UTC m=+1243.990961120" watchObservedRunningTime="2026-01-30 22:00:48.031686681 +0000 UTC m=+1243.992933714"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.057841 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.058025 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.058191 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.058247 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.058284 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.058359 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.058391 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dn9c\" (UniqueName: \"kubernetes.io/projected/e7cc7cf6-3592-4e25-9578-27ae56d6909b-kube-api-access-5dn9c\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.059332 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.059709 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.060652 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.067327 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.165883 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.166951 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.167245 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dn9c\" (UniqueName: \"kubernetes.io/projected/e7cc7cf6-3592-4e25-9578-27ae56d6909b-kube-api-access-5dn9c\") pod \"ovn-northd-0\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.207746 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.738473 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.781533 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jjmrl"]
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.838080 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-t86qb"]
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.839864 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.910125 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.919516 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t86qb"]
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.980653 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.983644 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.983959 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdnk\" (UniqueName: \"kubernetes.io/projected/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-kube-api-access-gjdnk\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.984084 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-config\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:48 crc kubenswrapper[4979]: I0130 22:00:48.984200 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-dns-svc\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.031136 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e7cc7cf6-3592-4e25-9578-27ae56d6909b","Type":"ContainerStarted","Data":"2bd740bd191cb301e1ace5a3abcf92c5ccb570c941fcbb8171a41eb9fdac51bb"}
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.086227 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.086394 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjdnk\" (UniqueName: \"kubernetes.io/projected/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-kube-api-access-gjdnk\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.086419 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-config\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.086471 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-dns-svc\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.086537 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.087434 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.088246 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-dns-svc\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.092410 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-config\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.095775 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.122078 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjdnk\" (UniqueName: \"kubernetes.io/projected/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-kube-api-access-gjdnk\") pod \"dnsmasq-dns-698758b865-t86qb\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.211534 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t86qb"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.755751 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t86qb"]
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.969408 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.976312 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.980052 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.980237 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.980328 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.980546 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-c8jn4"
Jan 30 22:00:49 crc kubenswrapper[4979]: I0130 22:00:49.988628 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.042730 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t86qb" event={"ID":"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4","Type":"ContainerStarted","Data":"9e5bb560297f4e0e8f2115f8c48331514e53ce9d31d3b53377b9d219de77d2e7"}
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.043335 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" containerID="cri-o://77fc722e9bec3fafe53167de52ee1127b71951d1b0c68ed26631637e0cb42c5e" gracePeriod=10
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.110615 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.110746 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.110851 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3258ad4a-d940-41c3-b875-afadfcc317d4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.110876 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28trk\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-kube-api-access-28trk\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.110922 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-lock\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.110957 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-cache\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.213761 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.213861 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.213962 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3258ad4a-d940-41c3-b875-afadfcc317d4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.213981 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28trk\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-kube-api-access-28trk\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.214047 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-lock\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.214045 4979 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.214288 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.214349 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:00:50.714330586 +0000 UTC m=+1246.675577619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.214500 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.214080 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-cache\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.214934 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-cache\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.215208 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-lock\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.220192 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3258ad4a-d940-41c3-b875-afadfcc317d4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.244403 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.248121 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28trk\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-kube-api-access-28trk\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.474862 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-t6khq"]
Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.476407 4979 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.479410 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.479846 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.480640 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.492895 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-t6khq"] Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.542253 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-t6khq"] Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.543426 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-tzk4z ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-t6khq" podUID="a06ee3da-092d-42ff-a8f5-b06a6e9022a7" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.559366 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qf69d"] Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.561116 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.566907 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qf69d"] Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.623279 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-dispersionconf\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.623657 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-combined-ca-bundle\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.623817 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-etc-swift\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.624299 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzk4z\" (UniqueName: \"kubernetes.io/projected/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-kube-api-access-tzk4z\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.624506 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-ring-data-devices\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.624582 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-scripts\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.624827 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-swiftconf\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727522 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-scripts\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727597 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727628 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-swiftconf\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727672 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4d6z\" (UniqueName: \"kubernetes.io/projected/29c6531f-d97f-4f39-95bd-4c2b8a75779f-kube-api-access-g4d6z\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727732 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzk4z\" (UniqueName: \"kubernetes.io/projected/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-kube-api-access-tzk4z\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727761 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-ring-data-devices\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.727796 4979 projected.go:288] Couldn't 
get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.727826 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-ring-data-devices\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.727830 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 22:00:50 crc kubenswrapper[4979]: E0130 22:00:50.727990 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:00:51.727970554 +0000 UTC m=+1247.689217587 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.728299 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-scripts\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.728446 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/29c6531f-d97f-4f39-95bd-4c2b8a75779f-etc-swift\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.728561 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-combined-ca-bundle\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.728729 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-swiftconf\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.728799 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-ring-data-devices\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.728899 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-dispersionconf\") pod \"swift-ring-rebalance-qf69d\" (UID: 
\"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.729136 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-dispersionconf\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.729222 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-combined-ca-bundle\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.729313 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-etc-swift\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.729802 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-etc-swift\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.730193 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-scripts\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.733613 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-swiftconf\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.733851 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-dispersionconf\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.747477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-combined-ca-bundle\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.751928 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzk4z\" (UniqueName: \"kubernetes.io/projected/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-kube-api-access-tzk4z\") pod \"swift-ring-rebalance-t6khq\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831357 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/29c6531f-d97f-4f39-95bd-4c2b8a75779f-etc-swift\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831451 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-combined-ca-bundle\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831523 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-dispersionconf\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831630 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-scripts\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831683 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-swiftconf\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831722 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4d6z\" (UniqueName: \"kubernetes.io/projected/29c6531f-d97f-4f39-95bd-4c2b8a75779f-kube-api-access-g4d6z\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.831777 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-ring-data-devices\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.832004 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/29c6531f-d97f-4f39-95bd-4c2b8a75779f-etc-swift\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.832580 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-ring-data-devices\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.832813 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-scripts\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.839567 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-combined-ca-bundle\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.841547 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-dispersionconf\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.842069 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-swiftconf\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.862204 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4d6z\" (UniqueName: \"kubernetes.io/projected/29c6531f-d97f-4f39-95bd-4c2b8a75779f-kube-api-access-g4d6z\") pod \"swift-ring-rebalance-qf69d\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:50 crc kubenswrapper[4979]: I0130 22:00:50.879317 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.055675 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.124318 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238289 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-swiftconf\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238349 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-dispersionconf\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238414 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-scripts\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238487 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzk4z\" (UniqueName: \"kubernetes.io/projected/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-kube-api-access-tzk4z\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238651 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-combined-ca-bundle\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238694 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-etc-swift\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.238741 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-ring-data-devices\") pod \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\" (UID: \"a06ee3da-092d-42ff-a8f5-b06a6e9022a7\") " Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.240264 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.240697 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-scripts" (OuterVolumeSpecName: "scripts") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.240913 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.243782 4979 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.243856 4979 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.243874 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.246172 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-kube-api-access-tzk4z" (OuterVolumeSpecName: "kube-api-access-tzk4z") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "kube-api-access-tzk4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.265898 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.267277 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.267431 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a06ee3da-092d-42ff-a8f5-b06a6e9022a7" (UID: "a06ee3da-092d-42ff-a8f5-b06a6e9022a7"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.287102 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qf69d"] Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.345557 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.345631 4979 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.345642 4979 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.345653 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzk4z\" (UniqueName: \"kubernetes.io/projected/a06ee3da-092d-42ff-a8f5-b06a6e9022a7-kube-api-access-tzk4z\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:51 crc kubenswrapper[4979]: I0130 22:00:51.752717 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:51 crc kubenswrapper[4979]: E0130 22:00:51.753350 4979 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 22:00:51 crc kubenswrapper[4979]: E0130 22:00:51.753367 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 22:00:51 crc kubenswrapper[4979]: E0130 22:00:51.753418 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:00:53.753402379 +0000 UTC m=+1249.714649412 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found Jan 30 22:00:52 crc kubenswrapper[4979]: I0130 22:00:52.806746 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-t6khq" Jan 30 22:00:52 crc kubenswrapper[4979]: I0130 22:00:52.807580 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qf69d" event={"ID":"29c6531f-d97f-4f39-95bd-4c2b8a75779f","Type":"ContainerStarted","Data":"1cded23ff5ee2d2e3497c55f604788871e1bcd1e4e1acb05a7084523b596fe7e"} Jan 30 22:00:52 crc kubenswrapper[4979]: I0130 22:00:52.887993 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-t6khq"] Jan 30 22:00:52 crc kubenswrapper[4979]: I0130 22:00:52.894842 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-t6khq"] Jan 30 22:00:53 crc kubenswrapper[4979]: I0130 22:00:53.080310 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a06ee3da-092d-42ff-a8f5-b06a6e9022a7" path="/var/lib/kubelet/pods/a06ee3da-092d-42ff-a8f5-b06a6e9022a7/volumes" Jan 30 22:00:53 crc kubenswrapper[4979]: I0130 22:00:53.804058 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:00:53 crc kubenswrapper[4979]: E0130 22:00:53.804366 4979 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 22:00:53 crc kubenswrapper[4979]: E0130 22:00:53.804635 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 22:00:53 crc kubenswrapper[4979]: E0130 22:00:53.804748 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:00:57.804723402 +0000 UTC m=+1253.765970425 (durationBeforeRetry 4s). 
Jan 30 22:00:53 crc kubenswrapper[4979]: I0130 22:00:53.820980 4979 generic.go:334] "Generic (PLEG): container finished" podID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerID="c95e9571ab3d28e43a0c69cdf9503d7a855b5db4e2dc8986089e4c89a9a844d2" exitCode=0
Jan 30 22:00:53 crc kubenswrapper[4979]: I0130 22:00:53.821304 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6795c6d5-6bb8-432f-b7ca-f29f33298093","Type":"ContainerDied","Data":"c95e9571ab3d28e43a0c69cdf9503d7a855b5db4e2dc8986089e4c89a9a844d2"}
Jan 30 22:00:53 crc kubenswrapper[4979]: I0130 22:00:53.826859 4979 generic.go:334] "Generic (PLEG): container finished" podID="34b4df8c-21a1-4acb-b209-643ded266729" containerID="77fc722e9bec3fafe53167de52ee1127b71951d1b0c68ed26631637e0cb42c5e" exitCode=0
Jan 30 22:00:53 crc kubenswrapper[4979]: I0130 22:00:53.826909 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" event={"ID":"34b4df8c-21a1-4acb-b209-643ded266729","Type":"ContainerDied","Data":"77fc722e9bec3fafe53167de52ee1127b71951d1b0c68ed26631637e0cb42c5e"}
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.117298 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6g89l"]
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.124870 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6g89l"
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.131572 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.137119 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6g89l"]
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.240883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ee1e511-fa3d-4c0f-b03b-c0608b253006-operator-scripts\") pod \"root-account-create-update-6g89l\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " pod="openstack/root-account-create-update-6g89l"
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.241134 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2dmq\" (UniqueName: \"kubernetes.io/projected/8ee1e511-fa3d-4c0f-b03b-c0608b253006-kube-api-access-m2dmq\") pod \"root-account-create-update-6g89l\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " pod="openstack/root-account-create-update-6g89l"
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.342997 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ee1e511-fa3d-4c0f-b03b-c0608b253006-operator-scripts\") pod \"root-account-create-update-6g89l\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " pod="openstack/root-account-create-update-6g89l"
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.343153 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2dmq\" (UniqueName: \"kubernetes.io/projected/8ee1e511-fa3d-4c0f-b03b-c0608b253006-kube-api-access-m2dmq\") pod \"root-account-create-update-6g89l\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " pod="openstack/root-account-create-update-6g89l"
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.344079 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ee1e511-fa3d-4c0f-b03b-c0608b253006-operator-scripts\") pod \"root-account-create-update-6g89l\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " pod="openstack/root-account-create-update-6g89l"
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.363685 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2dmq\" (UniqueName: \"kubernetes.io/projected/8ee1e511-fa3d-4c0f-b03b-c0608b253006-kube-api-access-m2dmq\") pod \"root-account-create-update-6g89l\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " pod="openstack/root-account-create-update-6g89l"
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.452337 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6g89l"
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.863344 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6795c6d5-6bb8-432f-b7ca-f29f33298093","Type":"ContainerStarted","Data":"62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962"}
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.894131 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371983.96067 podStartE2EDuration="52.894106108s" podCreationTimestamp="2026-01-30 22:00:03 +0000 UTC" firstStartedPulling="2026-01-30 22:00:05.305997284 +0000 UTC m=+1201.267244317" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:00:55.890647316 +0000 UTC m=+1251.851894389" watchObservedRunningTime="2026-01-30 22:00:55.894106108 +0000 UTC m=+1251.855353141"
Jan 30 22:00:55 crc kubenswrapper[4979]: I0130 22:00:55.966152 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6g89l"]
Jan 30 22:00:55 crc kubenswrapper[4979]: W0130 22:00:55.967847 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ee1e511_fa3d_4c0f_b03b_c0608b253006.slice/crio-c3a1713d8dba56bc8db700d2660c8bf6fc76e708d95ad158198b8242924e0642 WatchSource:0}: Error finding container c3a1713d8dba56bc8db700d2660c8bf6fc76e708d95ad158198b8242924e0642: Status 404 returned error can't find the container with id c3a1713d8dba56bc8db700d2660c8bf6fc76e708d95ad158198b8242924e0642
Jan 30 22:00:56 crc kubenswrapper[4979]: I0130 22:00:56.875747 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6g89l" event={"ID":"8ee1e511-fa3d-4c0f-b03b-c0608b253006","Type":"ContainerStarted","Data":"c3a1713d8dba56bc8db700d2660c8bf6fc76e708d95ad158198b8242924e0642"}
Jan 30 22:00:57 crc kubenswrapper[4979]: I0130 22:00:57.806626 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
Jan 30 22:00:57 crc kubenswrapper[4979]: E0130 22:00:57.806842 4979 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 22:00:57 crc kubenswrapper[4979]: E0130 22:00:57.806865 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 22:00:57 crc kubenswrapper[4979]: E0130 22:00:57.806950 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:01:05.806931865 +0000 UTC m=+1261.768178908 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found
Jan 30 22:00:59 crc kubenswrapper[4979]: I0130 22:00:59.894065 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout"
Jan 30 22:01:00 crc kubenswrapper[4979]: I0130 22:01:00.930800 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6g89l" event={"ID":"8ee1e511-fa3d-4c0f-b03b-c0608b253006","Type":"ContainerStarted","Data":"92d1caa7eb5e4a30383396fbbceaf2e0ce7b7c37d00ab11c4913c35b85a605cb"}
Jan 30 22:01:03 crc kubenswrapper[4979]: I0130 22:01:03.991212 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-6g89l" podStartSLOduration=8.991178188 podStartE2EDuration="8.991178188s" podCreationTimestamp="2026-01-30 22:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:03.988595938 +0000 UTC m=+1259.949843031" watchObservedRunningTime="2026-01-30 22:01:03.991178188 +0000 UTC m=+1259.952425221"
Jan 30 22:01:04 crc kubenswrapper[4979]: I0130 22:01:04.896213 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout"
Jan 30 22:01:04 crc kubenswrapper[4979]: I0130 22:01:04.911294 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 30 22:01:04 crc kubenswrapper[4979]: I0130 22:01:04.911398 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 30 22:01:04 crc kubenswrapper[4979]: I0130 22:01:04.985546 4979 generic.go:334] "Generic (PLEG): container finished" podID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerID="936faae891dc0d6463f534c26667ac6f817885146529e96b4394369309b4bf52" exitCode=0
Jan 30 22:01:04 crc kubenswrapper[4979]: I0130 22:01:04.985611 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"981f1fee-4d2a-4d80-bf38-80557b6c5033","Type":"ContainerDied","Data":"936faae891dc0d6463f534c26667ac6f817885146529e96b4394369309b4bf52"}
Jan 30 22:01:05 crc kubenswrapper[4979]: I0130 22:01:05.903334 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0"
\"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:01:05 crc kubenswrapper[4979]: E0130 22:01:05.903663 4979 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 22:01:05 crc kubenswrapper[4979]: E0130 22:01:05.904257 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 22:01:05 crc kubenswrapper[4979]: E0130 22:01:05.904353 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:01:21.904323904 +0000 UTC m=+1277.865570937 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found Jan 30 22:01:07 crc kubenswrapper[4979]: E0130 22:01:07.341750 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1776585578/1\": happened during read: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified" Jan 30 22:01:07 crc kubenswrapper[4979]: E0130 22:01:07.342624 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-northd,Image:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,Command:[/usr/bin/ovn-northd],Args:[-vfile:off -vconsole:info --n-threads=1 --ovnnb-db=ssl:ovsdbserver-nb-0.openstack.svc.cluster.local:6641 --ovnsb-db=ssl:ovsdbserver-sb-0.openstack.svc.cluster.local:6642 --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n694h687h59fh57h659h5fdh5f4h647h575h596h547h55ch666h565h577h57bh87h65ch67dh5f5h586h5c6h67ch5f9h697h595hb9h5b6h5b7h5cbh68fh664q,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:certs,Value:n5b6h645h58h5c9h5bbh676h676h68ch64bh95h587h699hddh647h585h8chd6h5f7h549h5cfh59bhddh657h6bhb8h699h8bh545h5fch8fh656hf9q,ValueFrom:nil,},EnvVar{Name:certs_metrics,Value:n5bch84h686h57dh5c4h5c4h555h5c5h58ch68dh74h59h558h5dfh549h8h8fh644h5ddh688h79h658h68bh668h669hbfh555h5d5hf8h5c9h8dh58dq,ValueFrom:nil,},EnvVar{Name:ovnnorthd-config,Value:n5c8h7ch56bh8dh8hc4h5dch9dh68h6bhb7h598h549h5dbh66fh6bh5b4h5cch5d6h55ch57fhfch588h89h5ddh5d6h65bh65bh8dhc4h67dh569q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-scripts,Value:n664hd8h66ch58dh64hc9h66bhd4h558h697h67bh557hdch664h567h669h555h696h556h556h5fh5bh569hbh665h9dh4h9bh564hc8h5b7h5c4q,ValueFrom:nil,},EnvVar{Name:tls-ca-bundle.pem,Value:nd9h65fh6fh56h565h64ch78h7ch59dh67dh555h96h688h674h594hbdh5f4h65bh5cfh55hc6hc6hf8h65h58fh67fhc8h9ch586h66ch54dhcbq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dn9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:n
il,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-northd-0_openstack(e7cc7cf6-3592-4e25-9578-27ae56d6909b): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1776585578/1\": happened during read: context canceled" logger="UnhandledError"
Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.444646 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl"
Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.538240 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-sb\") pod \"34b4df8c-21a1-4acb-b209-643ded266729\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") "
Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.538327 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-dns-svc\") pod \"34b4df8c-21a1-4acb-b209-643ded266729\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") "
Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.538383 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-nb\") pod \"34b4df8c-21a1-4acb-b209-643ded266729\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") "
Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.538468 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-config\") pod \"34b4df8c-21a1-4acb-b209-643ded266729\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") "
Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.538509 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/34b4df8c-21a1-4acb-b209-643ded266729-kube-api-access-jcvtc\") pod \"34b4df8c-21a1-4acb-b209-643ded266729\" (UID: \"34b4df8c-21a1-4acb-b209-643ded266729\") "
Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.544551 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b4df8c-21a1-4acb-b209-643ded266729-kube-api-access-jcvtc" (OuterVolumeSpecName: "kube-api-access-jcvtc") pod "34b4df8c-21a1-4acb-b209-643ded266729" (UID: "34b4df8c-21a1-4acb-b209-643ded266729"). InnerVolumeSpecName "kube-api-access-jcvtc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.578506 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "34b4df8c-21a1-4acb-b209-643ded266729" (UID: "34b4df8c-21a1-4acb-b209-643ded266729"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.579743 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-config" (OuterVolumeSpecName: "config") pod "34b4df8c-21a1-4acb-b209-643ded266729" (UID: "34b4df8c-21a1-4acb-b209-643ded266729"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.580213 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "34b4df8c-21a1-4acb-b209-643ded266729" (UID: "34b4df8c-21a1-4acb-b209-643ded266729"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.580731 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "34b4df8c-21a1-4acb-b209-643ded266729" (UID: "34b4df8c-21a1-4acb-b209-643ded266729"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.641208 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.641248 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.641264 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.641276 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b4df8c-21a1-4acb-b209-643ded266729-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:07 crc kubenswrapper[4979]: I0130 22:01:07.641288 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/34b4df8c-21a1-4acb-b209-643ded266729-kube-api-access-jcvtc\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.020656 4979 generic.go:334] "Generic (PLEG): container finished" podID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerID="34481ae8a2678ceccfab661611d1800a7d06957c7a2f8615105c54e98d7da90e" exitCode=0 Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.020764 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t86qb" event={"ID":"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4","Type":"ContainerDied","Data":"34481ae8a2678ceccfab661611d1800a7d06957c7a2f8615105c54e98d7da90e"} Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.024289 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" event={"ID":"34b4df8c-21a1-4acb-b209-643ded266729","Type":"ContainerDied","Data":"4b609c2b6d08a9d1b5ea4a1a01b4dc95a89b4a0ad9e2c562b11f37f1ad0a8fff"} Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.024378 4979 scope.go:117] "RemoveContainer" containerID="77fc722e9bec3fafe53167de52ee1127b71951d1b0c68ed26631637e0cb42c5e" Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.024308 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.026742 4979 generic.go:334] "Generic (PLEG): container finished" podID="8ee1e511-fa3d-4c0f-b03b-c0608b253006" containerID="92d1caa7eb5e4a30383396fbbceaf2e0ce7b7c37d00ab11c4913c35b85a605cb" exitCode=0 Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.026823 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6g89l" event={"ID":"8ee1e511-fa3d-4c0f-b03b-c0608b253006","Type":"ContainerDied","Data":"92d1caa7eb5e4a30383396fbbceaf2e0ce7b7c37d00ab11c4913c35b85a605cb"} Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.088195 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jjmrl"] Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.098855 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jjmrl"] Jan 30 22:01:08 crc kubenswrapper[4979]: E0130 22:01:08.950862 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified" Jan 30 22:01:08 crc kubenswrapper[4979]: E0130 22:01:08.951131 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:swift-ring-rebalance,Image:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,Command:[/usr/local/bin/swift-ring-tool all],Args:[],WorkingDir:/etc/swift,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CM_NAME,Value:swift-ring-files,ValueFrom:nil,},EnvVar{Name:NAMESPACE,Value:openstack,ValueFrom:nil,},EnvVar{Name:OWNER_APIVERSION,Value:swift.openstack.org/v1beta1,ValueFrom:nil,},EnvVar{Name:OWNER_KIND,Value:SwiftRing,ValueFrom:nil,},EnvVar{Name:OWNER_NAME,Value:swift-ring,ValueFrom:nil,},EnvVar{Name:OWNER_UID,Value:0c9dfe78-7cd7-434e-8308-095f6953ebb6,ValueFrom:nil,},EnvVar{Name:SWIFT_MIN_PART_HOURS,Value:1,ValueFrom:nil,},EnvVar{Name:SWIFT_PART_POWER,Value:10,ValueFrom:nil,},EnvVar{Name:SWIFT_REPLICAS,Value:1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/swift-ring-tool,SubPath:swift-ring-tool,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:swiftconf,ReadOnly:true,MountPath:/etc/swift/swift.conf,SubPath:swift.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ring-data-devices,ReadOnly:true,MountPath:/var/lib/config-data/ring-devices,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dispersionconf,ReadOnly:true,MountPath:/etc/swift/dispersion.conf,SubPath:dispersion.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4d6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,
SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-ring-rebalance-qf69d_openstack(29c6531f-d97f-4f39-95bd-4c2b8a75779f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:01:08 crc kubenswrapper[4979]: I0130 22:01:08.953155 4979 scope.go:117] "RemoveContainer" containerID="38f94d44f88fb380f22ffff6e87982c6b5afa5689e2945b28203090cec0d6de2" Jan 30 22:01:08 crc kubenswrapper[4979]: E0130 22:01:08.953149 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/swift-ring-rebalance-qf69d" podUID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" Jan 30 22:01:09 crc kubenswrapper[4979]: E0130 22:01:09.045424 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified\\\"\"" pod="openstack/swift-ring-rebalance-qf69d" podUID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.090308 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b4df8c-21a1-4acb-b209-643ded266729" path="/var/lib/kubelet/pods/34b4df8c-21a1-4acb-b209-643ded266729/volumes" Jan 30 22:01:09 crc kubenswrapper[4979]: E0130 22:01:09.388501 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1776585578/1\\\": happened during read: context canceled\"" pod="openstack/ovn-northd-0" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.427634 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6g89l" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.599625 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2dmq\" (UniqueName: \"kubernetes.io/projected/8ee1e511-fa3d-4c0f-b03b-c0608b253006-kube-api-access-m2dmq\") pod \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.599907 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ee1e511-fa3d-4c0f-b03b-c0608b253006-operator-scripts\") pod \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\" (UID: \"8ee1e511-fa3d-4c0f-b03b-c0608b253006\") " Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.600998 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1e511-fa3d-4c0f-b03b-c0608b253006-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ee1e511-fa3d-4c0f-b03b-c0608b253006" (UID: "8ee1e511-fa3d-4c0f-b03b-c0608b253006"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.606327 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ee1e511-fa3d-4c0f-b03b-c0608b253006-kube-api-access-m2dmq" (OuterVolumeSpecName: "kube-api-access-m2dmq") pod "8ee1e511-fa3d-4c0f-b03b-c0608b253006" (UID: "8ee1e511-fa3d-4c0f-b03b-c0608b253006"). InnerVolumeSpecName "kube-api-access-m2dmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.702821 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ee1e511-fa3d-4c0f-b03b-c0608b253006-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.703333 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2dmq\" (UniqueName: \"kubernetes.io/projected/8ee1e511-fa3d-4c0f-b03b-c0608b253006-kube-api-access-m2dmq\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:09 crc kubenswrapper[4979]: I0130 22:01:09.896747 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-jjmrl" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.055314 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"981f1fee-4d2a-4d80-bf38-80557b6c5033","Type":"ContainerStarted","Data":"32737030f36aec701cd5a18ee26db33f1920b61eff0e7b5c5143eb68b64ad2a2"} Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.055733 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.062545 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e7cc7cf6-3592-4e25-9578-27ae56d6909b","Type":"ContainerStarted","Data":"80763810cb3d21dbcce7752b095be501d4710e63b0bd5bbd6940f8072de72cd1"} Jan 30 22:01:10 crc kubenswrapper[4979]: E0130 22:01:10.065429 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.068756 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6g89l" event={"ID":"8ee1e511-fa3d-4c0f-b03b-c0608b253006","Type":"ContainerDied","Data":"c3a1713d8dba56bc8db700d2660c8bf6fc76e708d95ad158198b8242924e0642"} Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.068797 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6g89l" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.068813 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3a1713d8dba56bc8db700d2660c8bf6fc76e708d95ad158198b8242924e0642" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.078508 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t86qb" event={"ID":"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4","Type":"ContainerStarted","Data":"11b12b8a1042240e01cbd94aefdd223922da5bf565812f8e936ee2b92328c29b"} Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.080021 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.115280 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.003202002 podStartE2EDuration="1m9.115253852s" podCreationTimestamp="2026-01-30 22:00:01 +0000 UTC" firstStartedPulling="2026-01-30 22:00:04.254484537 +0000 UTC m=+1200.215731570" lastFinishedPulling="2026-01-30 22:00:30.366536377 +0000 UTC m=+1226.327783420" observedRunningTime="2026-01-30 22:01:10.087911107 +0000 UTC m=+1266.049158200" watchObservedRunningTime="2026-01-30 22:01:10.115253852 +0000 UTC m=+1266.076500905" Jan 30 22:01:10 crc kubenswrapper[4979]: I0130 22:01:10.137561 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-t86qb" podStartSLOduration=22.137534031 podStartE2EDuration="22.137534031s" podCreationTimestamp="2026-01-30 22:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:10.135655061 +0000 UTC m=+1266.096902104" watchObservedRunningTime="2026-01-30 22:01:10.137534031 +0000 UTC m=+1266.098781064" Jan 30 22:01:11 crc kubenswrapper[4979]: I0130 22:01:11.012278 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 22:01:11 crc kubenswrapper[4979]: E0130 22:01:11.089253 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" Jan 30 22:01:11 crc kubenswrapper[4979]: I0130 22:01:11.103348 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 22:01:11 crc kubenswrapper[4979]: I0130 22:01:11.723237 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kxk8g" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" 
probeResult="failure" output=< Jan 30 22:01:11 crc kubenswrapper[4979]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 22:01:11 crc kubenswrapper[4979]: > Jan 30 22:01:11 crc kubenswrapper[4979]: I0130 22:01:11.753130 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:01:11 crc kubenswrapper[4979]: I0130 22:01:11.754564 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.112089 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kxk8g-config-vrffk"] Jan 30 22:01:12 crc kubenswrapper[4979]: E0130 22:01:12.112642 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="init" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.112659 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="init" Jan 30 22:01:12 crc kubenswrapper[4979]: E0130 22:01:12.112679 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee1e511-fa3d-4c0f-b03b-c0608b253006" containerName="mariadb-account-create-update" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.112690 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee1e511-fa3d-4c0f-b03b-c0608b253006" containerName="mariadb-account-create-update" Jan 30 22:01:12 crc kubenswrapper[4979]: E0130 22:01:12.112708 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.112717 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.112938 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b4df8c-21a1-4acb-b209-643ded266729" containerName="dnsmasq-dns" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.112954 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ee1e511-fa3d-4c0f-b03b-c0608b253006" containerName="mariadb-account-create-update" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.115758 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.118395 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.124332 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxk8g-config-vrffk"] Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.259444 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.259543 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run-ovn\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.259574 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-log-ovn\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.259706 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgj4\" (UniqueName: \"kubernetes.io/projected/4d465425-7b56-4a09-8c9f-91888b8097f9-kube-api-access-rdgj4\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.259825 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-additional-scripts\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.259961 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-scripts\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.361891 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-log-ovn\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.361980 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdgj4\" (UniqueName: 
\"kubernetes.io/projected/4d465425-7b56-4a09-8c9f-91888b8097f9-kube-api-access-rdgj4\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362021 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-additional-scripts\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362082 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-scripts\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362217 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362273 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run-ovn\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362420 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run-ovn\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362411 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-log-ovn\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.362476 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.363178 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-additional-scripts\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.364398 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-scripts\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.389863 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdgj4\" (UniqueName: \"kubernetes.io/projected/4d465425-7b56-4a09-8c9f-91888b8097f9-kube-api-access-rdgj4\") pod \"ovn-controller-kxk8g-config-vrffk\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.436856 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:12 crc kubenswrapper[4979]: I0130 22:01:12.832687 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxk8g-config-vrffk"] Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.103640 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g-config-vrffk" event={"ID":"4d465425-7b56-4a09-8c9f-91888b8097f9","Type":"ContainerStarted","Data":"3cff4c21528190bac2f5805403dd35c95c1b670810f5a9a916e00292b42d081e"} Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.546188 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6g89l"] Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.554593 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6g89l"] Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.602088 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hplgk"] Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.603479 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.611221 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hplgk"] Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.612254 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.692099 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5kp\" (UniqueName: \"kubernetes.io/projected/6bd0719b-952d-4080-a685-ce90c1c3bf93-kube-api-access-pn5kp\") pod \"root-account-create-update-hplgk\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.692178 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bd0719b-952d-4080-a685-ce90c1c3bf93-operator-scripts\") pod \"root-account-create-update-hplgk\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.794153 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bd0719b-952d-4080-a685-ce90c1c3bf93-operator-scripts\") pod \"root-account-create-update-hplgk\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.795078 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn5kp\" (UniqueName: \"kubernetes.io/projected/6bd0719b-952d-4080-a685-ce90c1c3bf93-kube-api-access-pn5kp\") pod \"root-account-create-update-hplgk\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.796868 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bd0719b-952d-4080-a685-ce90c1c3bf93-operator-scripts\") pod \"root-account-create-update-hplgk\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.820398 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn5kp\" (UniqueName: \"kubernetes.io/projected/6bd0719b-952d-4080-a685-ce90c1c3bf93-kube-api-access-pn5kp\") pod \"root-account-create-update-hplgk\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:13 crc kubenswrapper[4979]: I0130 22:01:13.919478 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.127107 4979 generic.go:334] "Generic (PLEG): container finished" podID="4d465425-7b56-4a09-8c9f-91888b8097f9" containerID="80e1c8de2f5d2def08241e9e838d6caa9d9317d6bfc0e4390d83af93615634c1" exitCode=0 Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.127194 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g-config-vrffk" event={"ID":"4d465425-7b56-4a09-8c9f-91888b8097f9","Type":"ContainerDied","Data":"80e1c8de2f5d2def08241e9e838d6caa9d9317d6bfc0e4390d83af93615634c1"} Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.217194 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.284279 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvmv4"] Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.284553 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerName="dnsmasq-dns" containerID="cri-o://33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503" gracePeriod=10 Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.443310 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hplgk"] Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.756279 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.817730 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75t5q\" (UniqueName: \"kubernetes.io/projected/16f23a16-7799-4e68-a4f9-0a392a20d0ee-kube-api-access-75t5q\") pod \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.817854 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-dns-svc\") pod \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.817949 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-config\") pod \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\" (UID: \"16f23a16-7799-4e68-a4f9-0a392a20d0ee\") " Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.827332 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f23a16-7799-4e68-a4f9-0a392a20d0ee-kube-api-access-75t5q" (OuterVolumeSpecName: "kube-api-access-75t5q") pod "16f23a16-7799-4e68-a4f9-0a392a20d0ee" (UID: "16f23a16-7799-4e68-a4f9-0a392a20d0ee"). InnerVolumeSpecName "kube-api-access-75t5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.871047 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-config" (OuterVolumeSpecName: "config") pod "16f23a16-7799-4e68-a4f9-0a392a20d0ee" (UID: "16f23a16-7799-4e68-a4f9-0a392a20d0ee"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.874587 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16f23a16-7799-4e68-a4f9-0a392a20d0ee" (UID: "16f23a16-7799-4e68-a4f9-0a392a20d0ee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.924737 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75t5q\" (UniqueName: \"kubernetes.io/projected/16f23a16-7799-4e68-a4f9-0a392a20d0ee-kube-api-access-75t5q\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.925027 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:14 crc kubenswrapper[4979]: I0130 22:01:14.925163 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f23a16-7799-4e68-a4f9-0a392a20d0ee-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.082640 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ee1e511-fa3d-4c0f-b03b-c0608b253006" path="/var/lib/kubelet/pods/8ee1e511-fa3d-4c0f-b03b-c0608b253006/volumes" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.140667 4979 generic.go:334] "Generic (PLEG): container finished" podID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerID="33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503" exitCode=0 Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.140752 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.140770 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" event={"ID":"16f23a16-7799-4e68-a4f9-0a392a20d0ee","Type":"ContainerDied","Data":"33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503"} Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.140831 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvmv4" event={"ID":"16f23a16-7799-4e68-a4f9-0a392a20d0ee","Type":"ContainerDied","Data":"90afa68a341d945cd89f0268b29de137866688adbd59ae5cf4c97137825f4118"} Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.140868 4979 scope.go:117] "RemoveContainer" containerID="33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.144552 4979 generic.go:334] "Generic (PLEG): container finished" podID="6bd0719b-952d-4080-a685-ce90c1c3bf93" containerID="0d4dc8128d54521f9ca5effeeca0076315899d8799e67ef62bddd57c385893e0" exitCode=0 Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.144846 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hplgk" event={"ID":"6bd0719b-952d-4080-a685-ce90c1c3bf93","Type":"ContainerDied","Data":"0d4dc8128d54521f9ca5effeeca0076315899d8799e67ef62bddd57c385893e0"} Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.144896 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hplgk" event={"ID":"6bd0719b-952d-4080-a685-ce90c1c3bf93","Type":"ContainerStarted","Data":"a0fc3aa14643ab8338851ee1a2c5bec0bc555e85843e53791bb00ed3c540ea43"} Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.185858 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvmv4"] Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.187324 4979 scope.go:117] "RemoveContainer" containerID="a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.197361 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvmv4"] Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.226203 4979 scope.go:117] "RemoveContainer" containerID="33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503" Jan 30 22:01:15 crc kubenswrapper[4979]: E0130 22:01:15.227010 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503\": container with ID starting with 33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503 not found: ID does not exist" containerID="33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.227100 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503"} err="failed to get container status \"33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503\": rpc error: code = NotFound desc = could not find container \"33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503\": container with ID starting with 33f9b19b6f270b35a5f1f8aecd95105ae6d38cbb753576ac334c2098f6cc2503 not found: ID does not exist" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 
22:01:15.227176 4979 scope.go:117] "RemoveContainer" containerID="a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363" Jan 30 22:01:15 crc kubenswrapper[4979]: E0130 22:01:15.228732 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363\": container with ID starting with a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363 not found: ID does not exist" containerID="a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.228817 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363"} err="failed to get container status \"a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363\": rpc error: code = NotFound desc = could not find container \"a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363\": container with ID starting with a4f95623f6ca05ed4765eacf896c2ac781cf8abdc8a3372e2d7cca85691d2363 not found: ID does not exist" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.544451 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.647777 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-log-ovn\") pod \"4d465425-7b56-4a09-8c9f-91888b8097f9\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.647865 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-scripts\") pod \"4d465425-7b56-4a09-8c9f-91888b8097f9\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.647993 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-additional-scripts\") pod \"4d465425-7b56-4a09-8c9f-91888b8097f9\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.648050 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run\") pod \"4d465425-7b56-4a09-8c9f-91888b8097f9\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.648091 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run-ovn\") pod \"4d465425-7b56-4a09-8c9f-91888b8097f9\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.648207 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdgj4\" (UniqueName: \"kubernetes.io/projected/4d465425-7b56-4a09-8c9f-91888b8097f9-kube-api-access-rdgj4\") pod \"4d465425-7b56-4a09-8c9f-91888b8097f9\" (UID: \"4d465425-7b56-4a09-8c9f-91888b8097f9\") " Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.650179 
4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run" (OuterVolumeSpecName: "var-run") pod "4d465425-7b56-4a09-8c9f-91888b8097f9" (UID: "4d465425-7b56-4a09-8c9f-91888b8097f9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.650304 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4d465425-7b56-4a09-8c9f-91888b8097f9" (UID: "4d465425-7b56-4a09-8c9f-91888b8097f9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.650309 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4d465425-7b56-4a09-8c9f-91888b8097f9" (UID: "4d465425-7b56-4a09-8c9f-91888b8097f9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.650334 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4d465425-7b56-4a09-8c9f-91888b8097f9" (UID: "4d465425-7b56-4a09-8c9f-91888b8097f9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.651253 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-scripts" (OuterVolumeSpecName: "scripts") pod "4d465425-7b56-4a09-8c9f-91888b8097f9" (UID: "4d465425-7b56-4a09-8c9f-91888b8097f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.664425 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d465425-7b56-4a09-8c9f-91888b8097f9-kube-api-access-rdgj4" (OuterVolumeSpecName: "kube-api-access-rdgj4") pod "4d465425-7b56-4a09-8c9f-91888b8097f9" (UID: "4d465425-7b56-4a09-8c9f-91888b8097f9"). InnerVolumeSpecName "kube-api-access-rdgj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.750182 4979 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.750224 4979 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.750238 4979 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.750251 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdgj4\" (UniqueName: \"kubernetes.io/projected/4d465425-7b56-4a09-8c9f-91888b8097f9-kube-api-access-rdgj4\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.750264 4979 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d465425-7b56-4a09-8c9f-91888b8097f9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:15 crc kubenswrapper[4979]: I0130 22:01:15.750310 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d465425-7b56-4a09-8c9f-91888b8097f9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.157567 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g-config-vrffk" event={"ID":"4d465425-7b56-4a09-8c9f-91888b8097f9","Type":"ContainerDied","Data":"3cff4c21528190bac2f5805403dd35c95c1b670810f5a9a916e00292b42d081e"} Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.157635 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cff4c21528190bac2f5805403dd35c95c1b670810f5a9a916e00292b42d081e" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.157632 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxk8g-config-vrffk" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.320660 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zct57"] Jan 30 22:01:16 crc kubenswrapper[4979]: E0130 22:01:16.321959 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d465425-7b56-4a09-8c9f-91888b8097f9" containerName="ovn-config" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.321981 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d465425-7b56-4a09-8c9f-91888b8097f9" containerName="ovn-config" Jan 30 22:01:16 crc kubenswrapper[4979]: E0130 22:01:16.322049 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerName="init" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.322059 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerName="init" Jan 30 22:01:16 crc kubenswrapper[4979]: E0130 22:01:16.322111 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerName="dnsmasq-dns" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.322122 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerName="dnsmasq-dns" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.322669 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d465425-7b56-4a09-8c9f-91888b8097f9" containerName="ovn-config" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.322702 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" containerName="dnsmasq-dns" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.323834 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.344511 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zct57"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.439045 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bb3f-account-create-update-dc7fc"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.445753 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.451446 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.466402 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-operator-scripts\") pod \"keystone-db-create-zct57\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.466639 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w68r\" (UniqueName: \"kubernetes.io/projected/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-kube-api-access-7w68r\") pod \"keystone-db-create-zct57\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.458145 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bb3f-account-create-update-dc7fc"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.525462 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.568839 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-operator-scripts\") pod \"keystone-db-create-zct57\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.568962 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8772\" (UniqueName: \"kubernetes.io/projected/81fec9c6-beaa-4731-b527-51284f88fb92-kube-api-access-p8772\") pod \"keystone-bb3f-account-create-update-dc7fc\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.569014 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fec9c6-beaa-4731-b527-51284f88fb92-operator-scripts\") pod \"keystone-bb3f-account-create-update-dc7fc\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.569063 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w68r\" (UniqueName: \"kubernetes.io/projected/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-kube-api-access-7w68r\") pod \"keystone-db-create-zct57\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.570229 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-operator-scripts\") pod \"keystone-db-create-zct57\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.608835 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w68r\" (UniqueName: \"kubernetes.io/projected/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-kube-api-access-7w68r\") pod \"keystone-db-create-zct57\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.650118 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zct57" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.657339 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxk8g-config-vrffk"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.665202 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kxk8g-config-vrffk"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.671213 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bd0719b-952d-4080-a685-ce90c1c3bf93-operator-scripts\") pod \"6bd0719b-952d-4080-a685-ce90c1c3bf93\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.671357 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn5kp\" (UniqueName: \"kubernetes.io/projected/6bd0719b-952d-4080-a685-ce90c1c3bf93-kube-api-access-pn5kp\") pod \"6bd0719b-952d-4080-a685-ce90c1c3bf93\" (UID: \"6bd0719b-952d-4080-a685-ce90c1c3bf93\") " Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.671842 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8772\" (UniqueName: \"kubernetes.io/projected/81fec9c6-beaa-4731-b527-51284f88fb92-kube-api-access-p8772\") pod \"keystone-bb3f-account-create-update-dc7fc\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.671902 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fec9c6-beaa-4731-b527-51284f88fb92-operator-scripts\") pod \"keystone-bb3f-account-create-update-dc7fc\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.672703 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bd0719b-952d-4080-a685-ce90c1c3bf93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bd0719b-952d-4080-a685-ce90c1c3bf93" (UID: "6bd0719b-952d-4080-a685-ce90c1c3bf93"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.672778 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fec9c6-beaa-4731-b527-51284f88fb92-operator-scripts\") pod \"keystone-bb3f-account-create-update-dc7fc\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.675195 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd0719b-952d-4080-a685-ce90c1c3bf93-kube-api-access-pn5kp" (OuterVolumeSpecName: "kube-api-access-pn5kp") pod "6bd0719b-952d-4080-a685-ce90c1c3bf93" (UID: "6bd0719b-952d-4080-a685-ce90c1c3bf93"). InnerVolumeSpecName "kube-api-access-pn5kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.706460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8772\" (UniqueName: \"kubernetes.io/projected/81fec9c6-beaa-4731-b527-51284f88fb92-kube-api-access-p8772\") pod \"keystone-bb3f-account-create-update-dc7fc\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.717959 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-krqxx"] Jan 30 22:01:16 crc kubenswrapper[4979]: E0130 22:01:16.718581 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd0719b-952d-4080-a685-ce90c1c3bf93" containerName="mariadb-account-create-update" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.718605 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd0719b-952d-4080-a685-ce90c1c3bf93" containerName="mariadb-account-create-update" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.718866 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd0719b-952d-4080-a685-ce90c1c3bf93" containerName="mariadb-account-create-update" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.719709 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-krqxx" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.732451 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-0121-account-create-update-k277d"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.733764 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.751602 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0121-account-create-update-k277d"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.754585 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.773751 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bd0719b-952d-4080-a685-ce90c1c3bf93-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.773807 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn5kp\" (UniqueName: \"kubernetes.io/projected/6bd0719b-952d-4080-a685-ce90c1c3bf93-kube-api-access-pn5kp\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.789937 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-kxk8g" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.800101 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-krqxx"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.836788 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.875704 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcml4\" (UniqueName: \"kubernetes.io/projected/11b3f71c-0345-4261-8d0c-e7d700eb2932-kube-api-access-dcml4\") pod \"placement-db-create-krqxx\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " pod="openstack/placement-db-create-krqxx" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.875816 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3f71c-0345-4261-8d0c-e7d700eb2932-operator-scripts\") pod \"placement-db-create-krqxx\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " pod="openstack/placement-db-create-krqxx" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.875953 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfnkq\" (UniqueName: \"kubernetes.io/projected/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-kube-api-access-vfnkq\") pod \"placement-0121-account-create-update-k277d\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.875983 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-operator-scripts\") pod \"placement-0121-account-create-update-k277d\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.927876 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gds8v"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.933874 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gds8v" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.951957 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gds8v"] Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.977868 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3f71c-0345-4261-8d0c-e7d700eb2932-operator-scripts\") pod \"placement-db-create-krqxx\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " pod="openstack/placement-db-create-krqxx" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.977982 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfnkq\" (UniqueName: \"kubernetes.io/projected/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-kube-api-access-vfnkq\") pod \"placement-0121-account-create-update-k277d\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.978020 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-operator-scripts\") pod \"placement-0121-account-create-update-k277d\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.978126 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcml4\" (UniqueName: \"kubernetes.io/projected/11b3f71c-0345-4261-8d0c-e7d700eb2932-kube-api-access-dcml4\") pod \"placement-db-create-krqxx\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " pod="openstack/placement-db-create-krqxx" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.979315 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3f71c-0345-4261-8d0c-e7d700eb2932-operator-scripts\") pod \"placement-db-create-krqxx\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " pod="openstack/placement-db-create-krqxx" Jan 30 22:01:16 crc kubenswrapper[4979]: I0130 22:01:16.979946 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-operator-scripts\") pod \"placement-0121-account-create-update-k277d\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.034673 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcml4\" (UniqueName: \"kubernetes.io/projected/11b3f71c-0345-4261-8d0c-e7d700eb2932-kube-api-access-dcml4\") pod \"placement-db-create-krqxx\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " pod="openstack/placement-db-create-krqxx" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.040640 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfnkq\" (UniqueName: \"kubernetes.io/projected/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-kube-api-access-vfnkq\") pod \"placement-0121-account-create-update-k277d\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.044688 4979 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/placement-db-create-krqxx" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.052937 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b6e4-account-create-update-kc2rf"] Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.054665 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.055899 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.060898 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b6e4-account-create-update-kc2rf"] Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.062713 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.081629 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5cxl\" (UniqueName: \"kubernetes.io/projected/83840d8c-fe62-449c-a3ab-5404215dce87-kube-api-access-j5cxl\") pod \"glance-db-create-gds8v\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.081737 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83840d8c-fe62-449c-a3ab-5404215dce87-operator-scripts\") pod \"glance-db-create-gds8v\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.086542 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f23a16-7799-4e68-a4f9-0a392a20d0ee" path="/var/lib/kubelet/pods/16f23a16-7799-4e68-a4f9-0a392a20d0ee/volumes" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.087404 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d465425-7b56-4a09-8c9f-91888b8097f9" path="/var/lib/kubelet/pods/4d465425-7b56-4a09-8c9f-91888b8097f9/volumes" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.171988 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hplgk" event={"ID":"6bd0719b-952d-4080-a685-ce90c1c3bf93","Type":"ContainerDied","Data":"a0fc3aa14643ab8338851ee1a2c5bec0bc555e85843e53791bb00ed3c540ea43"} Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.172049 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0fc3aa14643ab8338851ee1a2c5bec0bc555e85843e53791bb00ed3c540ea43" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.172149 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hplgk" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.187018 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0187b79-63c8-4f13-af19-892e8c9b36f9-operator-scripts\") pod \"glance-b6e4-account-create-update-kc2rf\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.187254 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5cxl\" (UniqueName: \"kubernetes.io/projected/83840d8c-fe62-449c-a3ab-5404215dce87-kube-api-access-j5cxl\") pod \"glance-db-create-gds8v\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.187305 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-748xs\" (UniqueName: \"kubernetes.io/projected/e0187b79-63c8-4f13-af19-892e8c9b36f9-kube-api-access-748xs\") pod \"glance-b6e4-account-create-update-kc2rf\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.187371 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83840d8c-fe62-449c-a3ab-5404215dce87-operator-scripts\") pod \"glance-db-create-gds8v\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.188758 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83840d8c-fe62-449c-a3ab-5404215dce87-operator-scripts\") pod \"glance-db-create-gds8v\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.211010 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5cxl\" (UniqueName: \"kubernetes.io/projected/83840d8c-fe62-449c-a3ab-5404215dce87-kube-api-access-j5cxl\") pod \"glance-db-create-gds8v\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.233351 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zct57"] Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.288865 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0187b79-63c8-4f13-af19-892e8c9b36f9-operator-scripts\") pod \"glance-b6e4-account-create-update-kc2rf\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.289075 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-748xs\" (UniqueName: \"kubernetes.io/projected/e0187b79-63c8-4f13-af19-892e8c9b36f9-kube-api-access-748xs\") pod \"glance-b6e4-account-create-update-kc2rf\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.289688 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0187b79-63c8-4f13-af19-892e8c9b36f9-operator-scripts\") pod \"glance-b6e4-account-create-update-kc2rf\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.309681 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-748xs\" (UniqueName: \"kubernetes.io/projected/e0187b79-63c8-4f13-af19-892e8c9b36f9-kube-api-access-748xs\") pod \"glance-b6e4-account-create-update-kc2rf\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.396515 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gds8v" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.426644 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.510318 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bb3f-account-create-update-dc7fc"] Jan 30 22:01:17 crc kubenswrapper[4979]: W0130 22:01:17.520594 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81fec9c6_beaa_4731_b527_51284f88fb92.slice/crio-d074b9c0cac4af69a58ec0914076f3b85a117c4b7883918133ed450530c5792b WatchSource:0}: Error finding container d074b9c0cac4af69a58ec0914076f3b85a117c4b7883918133ed450530c5792b: Status 404 returned error can't find the container with id d074b9c0cac4af69a58ec0914076f3b85a117c4b7883918133ed450530c5792b Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.612384 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-krqxx"] Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.683675 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0121-account-create-update-k277d"] Jan 30 22:01:17 crc kubenswrapper[4979]: W0130 22:01:17.694417 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3dfb7c0_8bfc_47f8_bd7d_11fa49469326.slice/crio-fd2130c2f4e80ed23a0b8e6e8e0b181116c57373242e23c7d296301d174d816c WatchSource:0}: Error finding container fd2130c2f4e80ed23a0b8e6e8e0b181116c57373242e23c7d296301d174d816c: Status 404 returned error can't find the container with id fd2130c2f4e80ed23a0b8e6e8e0b181116c57373242e23c7d296301d174d816c Jan 30 22:01:17 crc kubenswrapper[4979]: I0130 22:01:17.937918 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b6e4-account-create-update-kc2rf"] Jan 30 22:01:17 crc kubenswrapper[4979]: W0130 22:01:17.948355 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0187b79_63c8_4f13_af19_892e8c9b36f9.slice/crio-138e61260819f8515e1d1d04130238a390c019d70d8de913e0c7347bd931d932 WatchSource:0}: Error finding container 138e61260819f8515e1d1d04130238a390c019d70d8de913e0c7347bd931d932: Status 404 returned error can't find the container with id 138e61260819f8515e1d1d04130238a390c019d70d8de913e0c7347bd931d932 Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.003804 4979 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-db-create-gds8v"] Jan 30 22:01:18 crc kubenswrapper[4979]: W0130 22:01:18.011899 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83840d8c_fe62_449c_a3ab_5404215dce87.slice/crio-4a3bd5fb26ac1a9aa01d3874c9336f1599acbc7e7f2ce67bb842a47f50e3651f WatchSource:0}: Error finding container 4a3bd5fb26ac1a9aa01d3874c9336f1599acbc7e7f2ce67bb842a47f50e3651f: Status 404 returned error can't find the container with id 4a3bd5fb26ac1a9aa01d3874c9336f1599acbc7e7f2ce67bb842a47f50e3651f Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.191460 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gds8v" event={"ID":"83840d8c-fe62-449c-a3ab-5404215dce87","Type":"ContainerStarted","Data":"4a3bd5fb26ac1a9aa01d3874c9336f1599acbc7e7f2ce67bb842a47f50e3651f"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.193156 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0121-account-create-update-k277d" event={"ID":"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326","Type":"ContainerStarted","Data":"e769167bc04ee63c4a76adb3fc46279acc328e27ce92e25a4537f461bf8adf9c"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.193197 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0121-account-create-update-k277d" event={"ID":"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326","Type":"ContainerStarted","Data":"fd2130c2f4e80ed23a0b8e6e8e0b181116c57373242e23c7d296301d174d816c"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.195389 4979 generic.go:334] "Generic (PLEG): container finished" podID="4320dd9b-0e3c-474b-bb1a-e00a72ae2938" containerID="5b349812d2a4fb80dba197720305dc0e90cd12df7c5b2836dc61787bdf46e880" exitCode=0 Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.195460 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zct57" event={"ID":"4320dd9b-0e3c-474b-bb1a-e00a72ae2938","Type":"ContainerDied","Data":"5b349812d2a4fb80dba197720305dc0e90cd12df7c5b2836dc61787bdf46e880"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.195492 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zct57" event={"ID":"4320dd9b-0e3c-474b-bb1a-e00a72ae2938","Type":"ContainerStarted","Data":"0dd19a0eaa6d35cccc61aae1cab273a967511b6ad16907005ffdc3ec7b0a3d9f"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.198592 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-krqxx" event={"ID":"11b3f71c-0345-4261-8d0c-e7d700eb2932","Type":"ContainerStarted","Data":"a36d94588495170c1a561d3edd9860fe102e6b36ace67d58883c2b853f52dd2a"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.198629 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-krqxx" event={"ID":"11b3f71c-0345-4261-8d0c-e7d700eb2932","Type":"ContainerStarted","Data":"4d97788c279e351ba877ef75288ac4a26b1ff285ae90d5d47ad933b5c4cdbcba"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.201108 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb3f-account-create-update-dc7fc" event={"ID":"81fec9c6-beaa-4731-b527-51284f88fb92","Type":"ContainerStarted","Data":"d2810e946d94d2fead500cfbde94a3439ae19f7224570848395a92c854c19316"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.201149 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bb3f-account-create-update-dc7fc" event={"ID":"81fec9c6-beaa-4731-b527-51284f88fb92","Type":"ContainerStarted","Data":"d074b9c0cac4af69a58ec0914076f3b85a117c4b7883918133ed450530c5792b"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.202918 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b6e4-account-create-update-kc2rf" event={"ID":"e0187b79-63c8-4f13-af19-892e8c9b36f9","Type":"ContainerStarted","Data":"138e61260819f8515e1d1d04130238a390c019d70d8de913e0c7347bd931d932"} Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.215518 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-0121-account-create-update-k277d" podStartSLOduration=2.215495738 podStartE2EDuration="2.215495738s" podCreationTimestamp="2026-01-30 22:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:18.214903712 +0000 UTC m=+1274.176150745" watchObservedRunningTime="2026-01-30 22:01:18.215495738 +0000 UTC m=+1274.176742771" Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.237346 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bb3f-account-create-update-dc7fc" podStartSLOduration=2.237321965 podStartE2EDuration="2.237321965s" podCreationTimestamp="2026-01-30 22:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:18.233880633 +0000 UTC m=+1274.195127676" watchObservedRunningTime="2026-01-30 22:01:18.237321965 +0000 UTC m=+1274.198568998" Jan 30 22:01:18 crc kubenswrapper[4979]: I0130 22:01:18.259129 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-krqxx" podStartSLOduration=2.259097112 podStartE2EDuration="2.259097112s" podCreationTimestamp="2026-01-30 22:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:18.252687219 +0000 UTC m=+1274.213934252" watchObservedRunningTime="2026-01-30 22:01:18.259097112 +0000 UTC m=+1274.220344145" Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.213950 4979 generic.go:334] "Generic (PLEG): container finished" podID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerID="d23312f80a962608adf95395e957ee6134bf402e8fc2a1db6e478f01ef1ed902" exitCode=0 Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.214136 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e28a1e34-b97c-4090-adf8-fa3e2b766365","Type":"ContainerDied","Data":"d23312f80a962608adf95395e957ee6134bf402e8fc2a1db6e478f01ef1ed902"} Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.222201 4979 generic.go:334] "Generic (PLEG): container finished" podID="11b3f71c-0345-4261-8d0c-e7d700eb2932" containerID="a36d94588495170c1a561d3edd9860fe102e6b36ace67d58883c2b853f52dd2a" exitCode=0 Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.222274 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-krqxx" event={"ID":"11b3f71c-0345-4261-8d0c-e7d700eb2932","Type":"ContainerDied","Data":"a36d94588495170c1a561d3edd9860fe102e6b36ace67d58883c2b853f52dd2a"} Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.224622 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-b6e4-account-create-update-kc2rf" event={"ID":"e0187b79-63c8-4f13-af19-892e8c9b36f9","Type":"ContainerStarted","Data":"ed4a97cfdf0ceeba9d88157069074ba43b147110d9fc2ad4b1393945bfaa8186"} Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.226616 4979 generic.go:334] "Generic (PLEG): container finished" podID="83840d8c-fe62-449c-a3ab-5404215dce87" containerID="c2e6fa2e1a73e8bf62b5ee3edf154e0d34b174fdf34335916ed3037f6db0258e" exitCode=0 Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.226788 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gds8v" event={"ID":"83840d8c-fe62-449c-a3ab-5404215dce87","Type":"ContainerDied","Data":"c2e6fa2e1a73e8bf62b5ee3edf154e0d34b174fdf34335916ed3037f6db0258e"} Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.291482 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b6e4-account-create-update-kc2rf" podStartSLOduration=2.2914562529999998 podStartE2EDuration="2.291456253s" podCreationTimestamp="2026-01-30 22:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:19.284903237 +0000 UTC m=+1275.246150270" watchObservedRunningTime="2026-01-30 22:01:19.291456253 +0000 UTC m=+1275.252703286" Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.643190 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zct57" Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.767797 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w68r\" (UniqueName: \"kubernetes.io/projected/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-kube-api-access-7w68r\") pod \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.768359 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-operator-scripts\") pod \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\" (UID: \"4320dd9b-0e3c-474b-bb1a-e00a72ae2938\") " Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.769795 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4320dd9b-0e3c-474b-bb1a-e00a72ae2938" (UID: "4320dd9b-0e3c-474b-bb1a-e00a72ae2938"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.775345 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-kube-api-access-7w68r" (OuterVolumeSpecName: "kube-api-access-7w68r") pod "4320dd9b-0e3c-474b-bb1a-e00a72ae2938" (UID: "4320dd9b-0e3c-474b-bb1a-e00a72ae2938"). InnerVolumeSpecName "kube-api-access-7w68r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.870984 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w68r\" (UniqueName: \"kubernetes.io/projected/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-kube-api-access-7w68r\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:19 crc kubenswrapper[4979]: I0130 22:01:19.871317 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4320dd9b-0e3c-474b-bb1a-e00a72ae2938-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.222831 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hplgk"] Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.239200 4979 generic.go:334] "Generic (PLEG): container finished" podID="e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" containerID="e769167bc04ee63c4a76adb3fc46279acc328e27ce92e25a4537f461bf8adf9c" exitCode=0 Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.239291 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0121-account-create-update-k277d" event={"ID":"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326","Type":"ContainerDied","Data":"e769167bc04ee63c4a76adb3fc46279acc328e27ce92e25a4537f461bf8adf9c"} Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.241539 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zct57" event={"ID":"4320dd9b-0e3c-474b-bb1a-e00a72ae2938","Type":"ContainerDied","Data":"0dd19a0eaa6d35cccc61aae1cab273a967511b6ad16907005ffdc3ec7b0a3d9f"} Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.241575 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dd19a0eaa6d35cccc61aae1cab273a967511b6ad16907005ffdc3ec7b0a3d9f" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.241548 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zct57" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.246913 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hplgk"] Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.279892 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e28a1e34-b97c-4090-adf8-fa3e2b766365","Type":"ContainerStarted","Data":"eb730deff98069b37c5aef76211404c3781f41d8e0443df163b818199c423131"} Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.280383 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.282129 4979 generic.go:334] "Generic (PLEG): container finished" podID="81fec9c6-beaa-4731-b527-51284f88fb92" containerID="d2810e946d94d2fead500cfbde94a3439ae19f7224570848395a92c854c19316" exitCode=0 Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.282237 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb3f-account-create-update-dc7fc" event={"ID":"81fec9c6-beaa-4731-b527-51284f88fb92","Type":"ContainerDied","Data":"d2810e946d94d2fead500cfbde94a3439ae19f7224570848395a92c854c19316"} Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.295780 4979 generic.go:334] "Generic (PLEG): container finished" podID="e0187b79-63c8-4f13-af19-892e8c9b36f9" containerID="ed4a97cfdf0ceeba9d88157069074ba43b147110d9fc2ad4b1393945bfaa8186" exitCode=0 Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.296380 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b6e4-account-create-update-kc2rf" event={"ID":"e0187b79-63c8-4f13-af19-892e8c9b36f9","Type":"ContainerDied","Data":"ed4a97cfdf0ceeba9d88157069074ba43b147110d9fc2ad4b1393945bfaa8186"} Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.314704 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371958.540094 podStartE2EDuration="1m18.314680919s" podCreationTimestamp="2026-01-30 22:00:02 +0000 UTC" firstStartedPulling="2026-01-30 22:00:04.154835645 +0000 UTC m=+1200.116082678" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:20.306796917 +0000 UTC m=+1276.268043950" watchObservedRunningTime="2026-01-30 22:01:20.314680919 +0000 UTC m=+1276.275927952" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.645077 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-krqxx" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.785775 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gds8v" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.790377 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcml4\" (UniqueName: \"kubernetes.io/projected/11b3f71c-0345-4261-8d0c-e7d700eb2932-kube-api-access-dcml4\") pod \"11b3f71c-0345-4261-8d0c-e7d700eb2932\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.790710 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3f71c-0345-4261-8d0c-e7d700eb2932-operator-scripts\") pod \"11b3f71c-0345-4261-8d0c-e7d700eb2932\" (UID: \"11b3f71c-0345-4261-8d0c-e7d700eb2932\") " Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.792304 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11b3f71c-0345-4261-8d0c-e7d700eb2932-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11b3f71c-0345-4261-8d0c-e7d700eb2932" (UID: "11b3f71c-0345-4261-8d0c-e7d700eb2932"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.794833 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b3f71c-0345-4261-8d0c-e7d700eb2932-kube-api-access-dcml4" (OuterVolumeSpecName: "kube-api-access-dcml4") pod "11b3f71c-0345-4261-8d0c-e7d700eb2932" (UID: "11b3f71c-0345-4261-8d0c-e7d700eb2932"). InnerVolumeSpecName "kube-api-access-dcml4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.893218 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83840d8c-fe62-449c-a3ab-5404215dce87-operator-scripts\") pod \"83840d8c-fe62-449c-a3ab-5404215dce87\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.893372 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5cxl\" (UniqueName: \"kubernetes.io/projected/83840d8c-fe62-449c-a3ab-5404215dce87-kube-api-access-j5cxl\") pod \"83840d8c-fe62-449c-a3ab-5404215dce87\" (UID: \"83840d8c-fe62-449c-a3ab-5404215dce87\") " Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.893795 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcml4\" (UniqueName: \"kubernetes.io/projected/11b3f71c-0345-4261-8d0c-e7d700eb2932-kube-api-access-dcml4\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.893812 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3f71c-0345-4261-8d0c-e7d700eb2932-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.893881 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83840d8c-fe62-449c-a3ab-5404215dce87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83840d8c-fe62-449c-a3ab-5404215dce87" (UID: "83840d8c-fe62-449c-a3ab-5404215dce87"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.897043 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83840d8c-fe62-449c-a3ab-5404215dce87-kube-api-access-j5cxl" (OuterVolumeSpecName: "kube-api-access-j5cxl") pod "83840d8c-fe62-449c-a3ab-5404215dce87" (UID: "83840d8c-fe62-449c-a3ab-5404215dce87"). InnerVolumeSpecName "kube-api-access-j5cxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.995443 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83840d8c-fe62-449c-a3ab-5404215dce87-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:20 crc kubenswrapper[4979]: I0130 22:01:20.995483 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5cxl\" (UniqueName: \"kubernetes.io/projected/83840d8c-fe62-449c-a3ab-5404215dce87-kube-api-access-j5cxl\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.083053 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd0719b-952d-4080-a685-ce90c1c3bf93" path="/var/lib/kubelet/pods/6bd0719b-952d-4080-a685-ce90c1c3bf93/volumes" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.306985 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-krqxx" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.306966 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-krqxx" event={"ID":"11b3f71c-0345-4261-8d0c-e7d700eb2932","Type":"ContainerDied","Data":"4d97788c279e351ba877ef75288ac4a26b1ff285ae90d5d47ad933b5c4cdbcba"} Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.307079 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d97788c279e351ba877ef75288ac4a26b1ff285ae90d5d47ad933b5c4cdbcba" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.310018 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gds8v" event={"ID":"83840d8c-fe62-449c-a3ab-5404215dce87","Type":"ContainerDied","Data":"4a3bd5fb26ac1a9aa01d3874c9336f1599acbc7e7f2ce67bb842a47f50e3651f"} Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.310099 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a3bd5fb26ac1a9aa01d3874c9336f1599acbc7e7f2ce67bb842a47f50e3651f" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.310316 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gds8v" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.667205 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.790718 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.795883 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.809261 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8772\" (UniqueName: \"kubernetes.io/projected/81fec9c6-beaa-4731-b527-51284f88fb92-kube-api-access-p8772\") pod \"81fec9c6-beaa-4731-b527-51284f88fb92\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.809418 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fec9c6-beaa-4731-b527-51284f88fb92-operator-scripts\") pod \"81fec9c6-beaa-4731-b527-51284f88fb92\" (UID: \"81fec9c6-beaa-4731-b527-51284f88fb92\") " Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.810454 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81fec9c6-beaa-4731-b527-51284f88fb92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81fec9c6-beaa-4731-b527-51284f88fb92" (UID: "81fec9c6-beaa-4731-b527-51284f88fb92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.819665 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81fec9c6-beaa-4731-b527-51284f88fb92-kube-api-access-p8772" (OuterVolumeSpecName: "kube-api-access-p8772") pod "81fec9c6-beaa-4731-b527-51284f88fb92" (UID: "81fec9c6-beaa-4731-b527-51284f88fb92"). InnerVolumeSpecName "kube-api-access-p8772". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.911642 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-748xs\" (UniqueName: \"kubernetes.io/projected/e0187b79-63c8-4f13-af19-892e8c9b36f9-kube-api-access-748xs\") pod \"e0187b79-63c8-4f13-af19-892e8c9b36f9\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.911751 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0187b79-63c8-4f13-af19-892e8c9b36f9-operator-scripts\") pod \"e0187b79-63c8-4f13-af19-892e8c9b36f9\" (UID: \"e0187b79-63c8-4f13-af19-892e8c9b36f9\") " Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.911846 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfnkq\" (UniqueName: \"kubernetes.io/projected/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-kube-api-access-vfnkq\") pod \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912062 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-operator-scripts\") pod \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\" (UID: \"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326\") " Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912271 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0187b79-63c8-4f13-af19-892e8c9b36f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0187b79-63c8-4f13-af19-892e8c9b36f9" (UID: "e0187b79-63c8-4f13-af19-892e8c9b36f9"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912579 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" (UID: "e3dfb7c0-8bfc-47f8-bd7d-11fa49469326"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912654 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912743 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912787 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fec9c6-beaa-4731-b527-51284f88fb92-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912806 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0187b79-63c8-4f13-af19-892e8c9b36f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.912824 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8772\" (UniqueName: \"kubernetes.io/projected/81fec9c6-beaa-4731-b527-51284f88fb92-kube-api-access-p8772\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:21 crc kubenswrapper[4979]: E0130 22:01:21.912759 4979 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 22:01:21 crc kubenswrapper[4979]: E0130 22:01:21.912868 4979 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 22:01:21 crc kubenswrapper[4979]: E0130 22:01:21.912947 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift podName:3258ad4a-d940-41c3-b875-afadfcc317d4 nodeName:}" failed. No retries permitted until 2026-01-30 22:01:53.912921713 +0000 UTC m=+1309.874168787 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift") pod "swift-storage-0" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4") : configmap "swift-ring-files" not found Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.915127 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0187b79-63c8-4f13-af19-892e8c9b36f9-kube-api-access-748xs" (OuterVolumeSpecName: "kube-api-access-748xs") pod "e0187b79-63c8-4f13-af19-892e8c9b36f9" (UID: "e0187b79-63c8-4f13-af19-892e8c9b36f9"). InnerVolumeSpecName "kube-api-access-748xs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:21 crc kubenswrapper[4979]: I0130 22:01:21.917248 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-kube-api-access-vfnkq" (OuterVolumeSpecName: "kube-api-access-vfnkq") pod "e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" (UID: "e3dfb7c0-8bfc-47f8-bd7d-11fa49469326"). InnerVolumeSpecName "kube-api-access-vfnkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.014956 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfnkq\" (UniqueName: \"kubernetes.io/projected/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326-kube-api-access-vfnkq\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.015020 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-748xs\" (UniqueName: \"kubernetes.io/projected/e0187b79-63c8-4f13-af19-892e8c9b36f9-kube-api-access-748xs\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.321431 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0121-account-create-update-k277d" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.321430 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0121-account-create-update-k277d" event={"ID":"e3dfb7c0-8bfc-47f8-bd7d-11fa49469326","Type":"ContainerDied","Data":"fd2130c2f4e80ed23a0b8e6e8e0b181116c57373242e23c7d296301d174d816c"} Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.321572 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd2130c2f4e80ed23a0b8e6e8e0b181116c57373242e23c7d296301d174d816c" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.324167 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb3f-account-create-update-dc7fc" event={"ID":"81fec9c6-beaa-4731-b527-51284f88fb92","Type":"ContainerDied","Data":"d074b9c0cac4af69a58ec0914076f3b85a117c4b7883918133ed450530c5792b"} Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.324229 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d074b9c0cac4af69a58ec0914076f3b85a117c4b7883918133ed450530c5792b" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.324185 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-dc7fc" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.326554 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b6e4-account-create-update-kc2rf" event={"ID":"e0187b79-63c8-4f13-af19-892e8c9b36f9","Type":"ContainerDied","Data":"138e61260819f8515e1d1d04130238a390c019d70d8de913e0c7347bd931d932"} Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.326593 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="138e61260819f8515e1d1d04130238a390c019d70d8de913e0c7347bd931d932" Jan 30 22:01:22 crc kubenswrapper[4979]: I0130 22:01:22.326596 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b6e4-account-create-update-kc2rf" Jan 30 22:01:23 crc kubenswrapper[4979]: I0130 22:01:23.340831 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qf69d" event={"ID":"29c6531f-d97f-4f39-95bd-4c2b8a75779f","Type":"ContainerStarted","Data":"3fb131d5453fa0ed56f53c12148fc22c6f507209c0a8f0e89d75133fef0aa6cb"} Jan 30 22:01:23 crc kubenswrapper[4979]: I0130 22:01:23.369658 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-qf69d" podStartSLOduration=2.495207574 podStartE2EDuration="33.369636691s" podCreationTimestamp="2026-01-30 22:00:50 +0000 UTC" firstStartedPulling="2026-01-30 22:00:51.286365144 +0000 UTC m=+1247.247612177" lastFinishedPulling="2026-01-30 22:01:22.160794261 +0000 UTC m=+1278.122041294" observedRunningTime="2026-01-30 22:01:23.366718103 +0000 UTC m=+1279.327965156" watchObservedRunningTime="2026-01-30 22:01:23.369636691 +0000 UTC m=+1279.330883754" Jan 30 22:01:23 crc kubenswrapper[4979]: I0130 22:01:23.603180 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.256068 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kkrz5"] Jan 30 22:01:25 crc kubenswrapper[4979]: E0130 22:01:25.258161 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0187b79-63c8-4f13-af19-892e8c9b36f9" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.258237 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0187b79-63c8-4f13-af19-892e8c9b36f9" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: E0130 22:01:25.258356 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81fec9c6-beaa-4731-b527-51284f88fb92" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.258409 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="81fec9c6-beaa-4731-b527-51284f88fb92" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: E0130 22:01:25.258481 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.258533 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: E0130 22:01:25.258599 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83840d8c-fe62-449c-a3ab-5404215dce87" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.258657 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="83840d8c-fe62-449c-a3ab-5404215dce87" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: E0130 22:01:25.258730 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4320dd9b-0e3c-474b-bb1a-e00a72ae2938" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.258788 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4320dd9b-0e3c-474b-bb1a-e00a72ae2938" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: E0130 22:01:25.258863 4979 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="11b3f71c-0345-4261-8d0c-e7d700eb2932" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.258923 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b3f71c-0345-4261-8d0c-e7d700eb2932" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.259151 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4320dd9b-0e3c-474b-bb1a-e00a72ae2938" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.259218 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0187b79-63c8-4f13-af19-892e8c9b36f9" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.259280 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="83840d8c-fe62-449c-a3ab-5404215dce87" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.259371 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b3f71c-0345-4261-8d0c-e7d700eb2932" containerName="mariadb-database-create" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.259434 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.259495 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="81fec9c6-beaa-4731-b527-51284f88fb92" containerName="mariadb-account-create-update" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.260137 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.263923 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.305117 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kkrz5"] Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.394559 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-666vh\" (UniqueName: \"kubernetes.io/projected/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-kube-api-access-666vh\") pod \"root-account-create-update-kkrz5\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.394760 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-operator-scripts\") pod \"root-account-create-update-kkrz5\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.496653 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-666vh\" (UniqueName: \"kubernetes.io/projected/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-kube-api-access-666vh\") pod \"root-account-create-update-kkrz5\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.496822 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-operator-scripts\") pod \"root-account-create-update-kkrz5\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.497801 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-operator-scripts\") pod \"root-account-create-update-kkrz5\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.516903 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-666vh\" (UniqueName: \"kubernetes.io/projected/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-kube-api-access-666vh\") pod \"root-account-create-update-kkrz5\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:25 crc kubenswrapper[4979]: I0130 22:01:25.632254 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:26 crc kubenswrapper[4979]: I0130 22:01:26.206199 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kkrz5"] Jan 30 22:01:26 crc kubenswrapper[4979]: W0130 22:01:26.211018 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod206c6cff_9f21_42be_b4d9_ebab3cb4ead8.slice/crio-97fcbc902b7db7b4b85fbc4f88f457922c0ab2e4582e7d120122ea3254488569 WatchSource:0}: Error finding container 97fcbc902b7db7b4b85fbc4f88f457922c0ab2e4582e7d120122ea3254488569: Status 404 returned error can't find the container with id 97fcbc902b7db7b4b85fbc4f88f457922c0ab2e4582e7d120122ea3254488569 Jan 30 22:01:26 crc kubenswrapper[4979]: I0130 22:01:26.392076 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kkrz5" event={"ID":"206c6cff-9f21-42be-b4d9-ebab3cb4ead8","Type":"ContainerStarted","Data":"97fcbc902b7db7b4b85fbc4f88f457922c0ab2e4582e7d120122ea3254488569"} Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.308651 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9zrqq"] Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.310517 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.312736 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tzvjb" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.313402 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.330809 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9zrqq"] Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.401549 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kkrz5" event={"ID":"206c6cff-9f21-42be-b4d9-ebab3cb4ead8","Type":"ContainerStarted","Data":"aaf97ef50c0887dcb66e3577095047927fdefa42dfe34fc18aab2b8a15ac9805"} Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.428449 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-kkrz5" podStartSLOduration=2.428411257 podStartE2EDuration="2.428411257s" podCreationTimestamp="2026-01-30 22:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:27.418543361 +0000 UTC m=+1283.379790394" watchObservedRunningTime="2026-01-30 22:01:27.428411257 +0000 UTC m=+1283.389658290" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.455893 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-db-sync-config-data\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.455988 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-combined-ca-bundle\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.456050 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-config-data\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.456204 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jxks\" (UniqueName: \"kubernetes.io/projected/023efd8e-7f0d-4ac5-80b3-db30dbb25905-kube-api-access-4jxks\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.558497 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jxks\" (UniqueName: \"kubernetes.io/projected/023efd8e-7f0d-4ac5-80b3-db30dbb25905-kube-api-access-4jxks\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.558666 4979 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-db-sync-config-data\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.558732 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-combined-ca-bundle\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.558763 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-config-data\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.570759 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-config-data\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.579140 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jxks\" (UniqueName: \"kubernetes.io/projected/023efd8e-7f0d-4ac5-80b3-db30dbb25905-kube-api-access-4jxks\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.580698 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-db-sync-config-data\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.593467 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-combined-ca-bundle\") pod \"glance-db-sync-9zrqq\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:27 crc kubenswrapper[4979]: I0130 22:01:27.634767 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9zrqq" Jan 30 22:01:28 crc kubenswrapper[4979]: I0130 22:01:28.425257 4979 generic.go:334] "Generic (PLEG): container finished" podID="206c6cff-9f21-42be-b4d9-ebab3cb4ead8" containerID="aaf97ef50c0887dcb66e3577095047927fdefa42dfe34fc18aab2b8a15ac9805" exitCode=0 Jan 30 22:01:28 crc kubenswrapper[4979]: I0130 22:01:28.425393 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kkrz5" event={"ID":"206c6cff-9f21-42be-b4d9-ebab3cb4ead8","Type":"ContainerDied","Data":"aaf97ef50c0887dcb66e3577095047927fdefa42dfe34fc18aab2b8a15ac9805"} Jan 30 22:01:28 crc kubenswrapper[4979]: I0130 22:01:28.571785 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9zrqq"] Jan 30 22:01:28 crc kubenswrapper[4979]: W0130 22:01:28.925667 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod023efd8e_7f0d_4ac5_80b3_db30dbb25905.slice/crio-55b5006f7fda8bd62b72a6d40335c6fb3c575f2d4b32986af417e66bd7514d71 WatchSource:0}: Error finding container 55b5006f7fda8bd62b72a6d40335c6fb3c575f2d4b32986af417e66bd7514d71: Status 404 returned error can't find the container with id 55b5006f7fda8bd62b72a6d40335c6fb3c575f2d4b32986af417e66bd7514d71 Jan 30 22:01:29 crc kubenswrapper[4979]: I0130 22:01:29.438577 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e7cc7cf6-3592-4e25-9578-27ae56d6909b","Type":"ContainerStarted","Data":"e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6"} Jan 30 22:01:29 crc kubenswrapper[4979]: I0130 22:01:29.439260 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 22:01:29 crc kubenswrapper[4979]: I0130 22:01:29.440769 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9zrqq" event={"ID":"023efd8e-7f0d-4ac5-80b3-db30dbb25905","Type":"ContainerStarted","Data":"55b5006f7fda8bd62b72a6d40335c6fb3c575f2d4b32986af417e66bd7514d71"} Jan 30 22:01:29 crc kubenswrapper[4979]: I0130 22:01:29.478209 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.387463111 podStartE2EDuration="42.478190018s" podCreationTimestamp="2026-01-30 22:00:47 +0000 UTC" firstStartedPulling="2026-01-30 22:00:48.894724037 +0000 UTC m=+1244.855971060" lastFinishedPulling="2026-01-30 22:01:28.985450924 +0000 UTC m=+1284.946697967" observedRunningTime="2026-01-30 22:01:29.470714377 +0000 UTC m=+1285.431961410" watchObservedRunningTime="2026-01-30 22:01:29.478190018 +0000 UTC m=+1285.439437051" Jan 30 22:01:29 crc kubenswrapper[4979]: I0130 22:01:29.969412 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.117309 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-666vh\" (UniqueName: \"kubernetes.io/projected/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-kube-api-access-666vh\") pod \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.117492 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-operator-scripts\") pod \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\" (UID: \"206c6cff-9f21-42be-b4d9-ebab3cb4ead8\") " Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.118099 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "206c6cff-9f21-42be-b4d9-ebab3cb4ead8" (UID: "206c6cff-9f21-42be-b4d9-ebab3cb4ead8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.136561 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-kube-api-access-666vh" (OuterVolumeSpecName: "kube-api-access-666vh") pod "206c6cff-9f21-42be-b4d9-ebab3cb4ead8" (UID: "206c6cff-9f21-42be-b4d9-ebab3cb4ead8"). InnerVolumeSpecName "kube-api-access-666vh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.220786 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-666vh\" (UniqueName: \"kubernetes.io/projected/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-kube-api-access-666vh\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.220835 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206c6cff-9f21-42be-b4d9-ebab3cb4ead8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.455128 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kkrz5" event={"ID":"206c6cff-9f21-42be-b4d9-ebab3cb4ead8","Type":"ContainerDied","Data":"97fcbc902b7db7b4b85fbc4f88f457922c0ab2e4582e7d120122ea3254488569"} Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.455160 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kkrz5" Jan 30 22:01:30 crc kubenswrapper[4979]: I0130 22:01:30.455176 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97fcbc902b7db7b4b85fbc4f88f457922c0ab2e4582e7d120122ea3254488569" Jan 30 22:01:31 crc kubenswrapper[4979]: I0130 22:01:31.467117 4979 generic.go:334] "Generic (PLEG): container finished" podID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" containerID="3fb131d5453fa0ed56f53c12148fc22c6f507209c0a8f0e89d75133fef0aa6cb" exitCode=0 Jan 30 22:01:31 crc kubenswrapper[4979]: I0130 22:01:31.467171 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qf69d" event={"ID":"29c6531f-d97f-4f39-95bd-4c2b8a75779f","Type":"ContainerDied","Data":"3fb131d5453fa0ed56f53c12148fc22c6f507209c0a8f0e89d75133fef0aa6cb"} Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.041629 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.041704 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.834277 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914515 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-ring-data-devices\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914593 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-scripts\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914661 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-swiftconf\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914682 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-combined-ca-bundle\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914708 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4d6z\" (UniqueName: \"kubernetes.io/projected/29c6531f-d97f-4f39-95bd-4c2b8a75779f-kube-api-access-g4d6z\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: 
\"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914754 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/29c6531f-d97f-4f39-95bd-4c2b8a75779f-etc-swift\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.914846 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-dispersionconf\") pod \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\" (UID: \"29c6531f-d97f-4f39-95bd-4c2b8a75779f\") " Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.917924 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.918380 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29c6531f-d97f-4f39-95bd-4c2b8a75779f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.924458 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c6531f-d97f-4f39-95bd-4c2b8a75779f-kube-api-access-g4d6z" (OuterVolumeSpecName: "kube-api-access-g4d6z") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "kube-api-access-g4d6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.930907 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.948902 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-scripts" (OuterVolumeSpecName: "scripts") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.958325 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:32 crc kubenswrapper[4979]: I0130 22:01:32.958457 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "29c6531f-d97f-4f39-95bd-4c2b8a75779f" (UID: "29c6531f-d97f-4f39-95bd-4c2b8a75779f"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016776 4979 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016826 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016843 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4d6z\" (UniqueName: \"kubernetes.io/projected/29c6531f-d97f-4f39-95bd-4c2b8a75779f-kube-api-access-g4d6z\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016855 4979 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/29c6531f-d97f-4f39-95bd-4c2b8a75779f-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016870 4979 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/29c6531f-d97f-4f39-95bd-4c2b8a75779f-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016933 4979 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.016967 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c6531f-d97f-4f39-95bd-4c2b8a75779f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.492305 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qf69d" event={"ID":"29c6531f-d97f-4f39-95bd-4c2b8a75779f","Type":"ContainerDied","Data":"1cded23ff5ee2d2e3497c55f604788871e1bcd1e4e1acb05a7084523b596fe7e"} Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.492380 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cded23ff5ee2d2e3497c55f604788871e1bcd1e4e1acb05a7084523b596fe7e" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.492421 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qf69d" Jan 30 22:01:33 crc kubenswrapper[4979]: I0130 22:01:33.674290 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.222473 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-mvqgx"] Jan 30 22:01:34 crc kubenswrapper[4979]: E0130 22:01:34.222883 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" containerName="swift-ring-rebalance" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.222899 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" containerName="swift-ring-rebalance" Jan 30 22:01:34 crc kubenswrapper[4979]: E0130 22:01:34.222920 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206c6cff-9f21-42be-b4d9-ebab3cb4ead8" containerName="mariadb-account-create-update" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.222930 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="206c6cff-9f21-42be-b4d9-ebab3cb4ead8" containerName="mariadb-account-create-update" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.223109 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="206c6cff-9f21-42be-b4d9-ebab3cb4ead8" containerName="mariadb-account-create-update" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.223132 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" containerName="swift-ring-rebalance" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.223720 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.237379 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mvqgx"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.347270 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlkh4\" (UniqueName: \"kubernetes.io/projected/a2df91e7-6710-4ee4-a671-4b19dc5c2798-kube-api-access-hlkh4\") pod \"cinder-db-create-mvqgx\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.347337 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2df91e7-6710-4ee4-a671-4b19dc5c2798-operator-scripts\") pod \"cinder-db-create-mvqgx\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.439626 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-18a2-account-create-update-xznvc"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.441316 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.444796 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.448621 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlkh4\" (UniqueName: \"kubernetes.io/projected/a2df91e7-6710-4ee4-a671-4b19dc5c2798-kube-api-access-hlkh4\") pod \"cinder-db-create-mvqgx\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.448853 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2df91e7-6710-4ee4-a671-4b19dc5c2798-operator-scripts\") pod \"cinder-db-create-mvqgx\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.450003 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2df91e7-6710-4ee4-a671-4b19dc5c2798-operator-scripts\") pod \"cinder-db-create-mvqgx\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.451405 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-18a2-account-create-update-xznvc"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.481327 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlkh4\" (UniqueName: \"kubernetes.io/projected/a2df91e7-6710-4ee4-a671-4b19dc5c2798-kube-api-access-hlkh4\") pod \"cinder-db-create-mvqgx\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.526438 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-95kjb"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.527690 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.547362 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-95kjb"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.550808 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f6xq\" (UniqueName: \"kubernetes.io/projected/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-kube-api-access-7f6xq\") pod \"cinder-18a2-account-create-update-xznvc\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.551086 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-operator-scripts\") pod \"cinder-18a2-account-create-update-xznvc\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.606834 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.623539 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d511-account-create-update-jtbft"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.624836 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.629150 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.647083 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d511-account-create-update-jtbft"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.655312 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f6xq\" (UniqueName: \"kubernetes.io/projected/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-kube-api-access-7f6xq\") pod \"cinder-18a2-account-create-update-xznvc\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.655372 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-operator-scripts\") pod \"cinder-18a2-account-create-update-xznvc\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.655478 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/175f02fa-3089-4350-a658-c939f6e6ef9f-operator-scripts\") pod \"barbican-db-create-95kjb\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.655523 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgmk5\" (UniqueName: \"kubernetes.io/projected/175f02fa-3089-4350-a658-c939f6e6ef9f-kube-api-access-dgmk5\") pod \"barbican-db-create-95kjb\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.658506 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-operator-scripts\") pod \"cinder-18a2-account-create-update-xznvc\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.680624 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f6xq\" (UniqueName: \"kubernetes.io/projected/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-kube-api-access-7f6xq\") pod \"cinder-18a2-account-create-update-xznvc\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.753330 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-tj4gc"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.754753 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.757221 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/175f02fa-3089-4350-a658-c939f6e6ef9f-operator-scripts\") pod \"barbican-db-create-95kjb\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.757282 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc3a0116-2f4a-4dde-bf99-56759f4349bc-operator-scripts\") pod \"neutron-d511-account-create-update-jtbft\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.757320 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgmk5\" (UniqueName: \"kubernetes.io/projected/175f02fa-3089-4350-a658-c939f6e6ef9f-kube-api-access-dgmk5\") pod \"barbican-db-create-95kjb\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.757485 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrtr\" (UniqueName: \"kubernetes.io/projected/bc3a0116-2f4a-4dde-bf99-56759f4349bc-kube-api-access-jbrtr\") pod \"neutron-d511-account-create-update-jtbft\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.758305 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.758757 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.758888 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dx6hv" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.759296 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/175f02fa-3089-4350-a658-c939f6e6ef9f-operator-scripts\") pod \"barbican-db-create-95kjb\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.759597 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.760875 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.770045 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5880-account-create-update-nvk6p"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.771275 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.775295 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.785245 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5880-account-create-update-nvk6p"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.821581 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tj4gc"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.822103 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgmk5\" (UniqueName: \"kubernetes.io/projected/175f02fa-3089-4350-a658-c939f6e6ef9f-kube-api-access-dgmk5\") pod \"barbican-db-create-95kjb\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.850413 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.858641 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdvm\" (UniqueName: \"kubernetes.io/projected/fac7007d-8147-477c-a42e-2463290030ff-kube-api-access-xgdvm\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.858689 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrtr\" (UniqueName: \"kubernetes.io/projected/bc3a0116-2f4a-4dde-bf99-56759f4349bc-kube-api-access-jbrtr\") pod \"neutron-d511-account-create-update-jtbft\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.858719 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8b67e98-62a7-4a61-835e-8b7ec20167f3-operator-scripts\") pod \"barbican-5880-account-create-update-nvk6p\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.858916 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-config-data\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.859062 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-combined-ca-bundle\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.859101 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc3a0116-2f4a-4dde-bf99-56759f4349bc-operator-scripts\") pod \"neutron-d511-account-create-update-jtbft\" (UID: 
\"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.859207 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v5bm\" (UniqueName: \"kubernetes.io/projected/f8b67e98-62a7-4a61-835e-8b7ec20167f3-kube-api-access-4v5bm\") pod \"barbican-5880-account-create-update-nvk6p\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.860683 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc3a0116-2f4a-4dde-bf99-56759f4349bc-operator-scripts\") pod \"neutron-d511-account-create-update-jtbft\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.904466 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbrtr\" (UniqueName: \"kubernetes.io/projected/bc3a0116-2f4a-4dde-bf99-56759f4349bc-kube-api-access-jbrtr\") pod \"neutron-d511-account-create-update-jtbft\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.904536 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-svtcv"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.906929 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.940547 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-svtcv"] Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.962801 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-combined-ca-bundle\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.962906 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v5bm\" (UniqueName: \"kubernetes.io/projected/f8b67e98-62a7-4a61-835e-8b7ec20167f3-kube-api-access-4v5bm\") pod \"barbican-5880-account-create-update-nvk6p\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.963120 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd39b08-adf2-44da-b301-8e8694590426-operator-scripts\") pod \"neutron-db-create-svtcv\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.963234 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx2nb\" (UniqueName: \"kubernetes.io/projected/6dd39b08-adf2-44da-b301-8e8694590426-kube-api-access-nx2nb\") pod \"neutron-db-create-svtcv\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 
22:01:34.963339 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgdvm\" (UniqueName: \"kubernetes.io/projected/fac7007d-8147-477c-a42e-2463290030ff-kube-api-access-xgdvm\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.963384 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8b67e98-62a7-4a61-835e-8b7ec20167f3-operator-scripts\") pod \"barbican-5880-account-create-update-nvk6p\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.963550 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-config-data\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:34 crc kubenswrapper[4979]: I0130 22:01:34.964232 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8b67e98-62a7-4a61-835e-8b7ec20167f3-operator-scripts\") pod \"barbican-5880-account-create-update-nvk6p\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.000506 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-config-data\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.000828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-combined-ca-bundle\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.012828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgdvm\" (UniqueName: \"kubernetes.io/projected/fac7007d-8147-477c-a42e-2463290030ff-kube-api-access-xgdvm\") pod \"keystone-db-sync-tj4gc\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.026288 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v5bm\" (UniqueName: \"kubernetes.io/projected/f8b67e98-62a7-4a61-835e-8b7ec20167f3-kube-api-access-4v5bm\") pod \"barbican-5880-account-create-update-nvk6p\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.058682 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.065587 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd39b08-adf2-44da-b301-8e8694590426-operator-scripts\") pod \"neutron-db-create-svtcv\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.065961 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx2nb\" (UniqueName: \"kubernetes.io/projected/6dd39b08-adf2-44da-b301-8e8694590426-kube-api-access-nx2nb\") pod \"neutron-db-create-svtcv\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.070031 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd39b08-adf2-44da-b301-8e8694590426-operator-scripts\") pod \"neutron-db-create-svtcv\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.096699 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx2nb\" (UniqueName: \"kubernetes.io/projected/6dd39b08-adf2-44da-b301-8e8694590426-kube-api-access-nx2nb\") pod \"neutron-db-create-svtcv\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.124694 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.164940 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.235806 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.332646 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mvqgx"] Jan 30 22:01:35 crc kubenswrapper[4979]: W0130 22:01:35.380584 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2df91e7_6710_4ee4_a671_4b19dc5c2798.slice/crio-50dd0fc0b49e80e3c5debd93de5d7780d46ec7c1df30999b8a0ad23d4fee2659 WatchSource:0}: Error finding container 50dd0fc0b49e80e3c5debd93de5d7780d46ec7c1df30999b8a0ad23d4fee2659: Status 404 returned error can't find the container with id 50dd0fc0b49e80e3c5debd93de5d7780d46ec7c1df30999b8a0ad23d4fee2659 Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.601258 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mvqgx" event={"ID":"a2df91e7-6710-4ee4-a671-4b19dc5c2798","Type":"ContainerStarted","Data":"50dd0fc0b49e80e3c5debd93de5d7780d46ec7c1df30999b8a0ad23d4fee2659"} Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.642899 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-95kjb"] Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.751766 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-18a2-account-create-update-xznvc"] Jan 30 22:01:35 crc kubenswrapper[4979]: W0130 22:01:35.760268 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79a4cbbe_93e4_414e_9ca3_2cd182d6ed96.slice/crio-6217a069089e05bd31025a7ffa6ff66f1e5f4c74966d6a09befd4b32e119c8be WatchSource:0}: Error finding container 6217a069089e05bd31025a7ffa6ff66f1e5f4c74966d6a09befd4b32e119c8be: Status 404 returned error can't find the container with id 6217a069089e05bd31025a7ffa6ff66f1e5f4c74966d6a09befd4b32e119c8be Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.852366 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d511-account-create-update-jtbft"] Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.922187 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5880-account-create-update-nvk6p"] Jan 30 22:01:35 crc kubenswrapper[4979]: I0130 22:01:35.965840 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tj4gc"] Jan 30 22:01:36 crc kubenswrapper[4979]: W0130 22:01:36.148686 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dd39b08_adf2_44da_b301_8e8694590426.slice/crio-cadb61eae8bb6de1416c95763f739a2cc5dec32932d6be854dd0b0c8fd871d27 WatchSource:0}: Error finding container cadb61eae8bb6de1416c95763f739a2cc5dec32932d6be854dd0b0c8fd871d27: Status 404 returned error can't find the container with id cadb61eae8bb6de1416c95763f739a2cc5dec32932d6be854dd0b0c8fd871d27 Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.149864 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-svtcv"] Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.616611 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-95kjb" event={"ID":"175f02fa-3089-4350-a658-c939f6e6ef9f","Type":"ContainerDied","Data":"e944b74595e093897d5163f1d6f5e2841d79cfe7a27b236506370f93704312ba"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.616714 4979 generic.go:334] 
"Generic (PLEG): container finished" podID="175f02fa-3089-4350-a658-c939f6e6ef9f" containerID="e944b74595e093897d5163f1d6f5e2841d79cfe7a27b236506370f93704312ba" exitCode=0 Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.617380 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-95kjb" event={"ID":"175f02fa-3089-4350-a658-c939f6e6ef9f","Type":"ContainerStarted","Data":"e28eadd933e61c8cf81e7798598f35cc0b2d5d5bba932062fca900134c507514"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.625523 4979 generic.go:334] "Generic (PLEG): container finished" podID="a2df91e7-6710-4ee4-a671-4b19dc5c2798" containerID="79ca49dab9783f66a2ceb714d9fa0a2f61e36e1771efaec7c095de2ed5249a25" exitCode=0 Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.625597 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mvqgx" event={"ID":"a2df91e7-6710-4ee4-a671-4b19dc5c2798","Type":"ContainerDied","Data":"79ca49dab9783f66a2ceb714d9fa0a2f61e36e1771efaec7c095de2ed5249a25"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.638766 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18a2-account-create-update-xznvc" event={"ID":"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96","Type":"ContainerStarted","Data":"60202a94174e28cbc487661cc024c8a1cf6c22c3cad5bc10eaa16a6b4124fa58"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.638848 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18a2-account-create-update-xznvc" event={"ID":"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96","Type":"ContainerStarted","Data":"6217a069089e05bd31025a7ffa6ff66f1e5f4c74966d6a09befd4b32e119c8be"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.654831 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d511-account-create-update-jtbft" event={"ID":"bc3a0116-2f4a-4dde-bf99-56759f4349bc","Type":"ContainerStarted","Data":"046e829584329e51995faf5e5f7dfeed89e26cdea94351a2f27847446a921702"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.654893 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d511-account-create-update-jtbft" event={"ID":"bc3a0116-2f4a-4dde-bf99-56759f4349bc","Type":"ContainerStarted","Data":"ea00611de73705c35473a924d4f7c549482419a2f78ecfeaf84b3d1d727771aa"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.657728 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-svtcv" event={"ID":"6dd39b08-adf2-44da-b301-8e8694590426","Type":"ContainerStarted","Data":"e569170f774015f0e1ddac11812bbd2f299bdb3f6dc5151d5fb36790b57f47e8"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.657784 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-svtcv" event={"ID":"6dd39b08-adf2-44da-b301-8e8694590426","Type":"ContainerStarted","Data":"cadb61eae8bb6de1416c95763f739a2cc5dec32932d6be854dd0b0c8fd871d27"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.667522 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5880-account-create-update-nvk6p" event={"ID":"f8b67e98-62a7-4a61-835e-8b7ec20167f3","Type":"ContainerStarted","Data":"bf7d515c41a90616fc9c098ab7b86a49d6e45238cee5250dcba6e62cadfccb13"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.667586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5880-account-create-update-nvk6p" 
event={"ID":"f8b67e98-62a7-4a61-835e-8b7ec20167f3","Type":"ContainerStarted","Data":"503e7d3ba39535a71f67070c35c2d482f374ad3f2b694d7668c84e006b975dc6"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.669466 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tj4gc" event={"ID":"fac7007d-8147-477c-a42e-2463290030ff","Type":"ContainerStarted","Data":"b66b3c202ff49e3b9a37dcd38590680dac6fdd11f7dbfdf69a3e9361cda17e7e"} Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.698954 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-svtcv" podStartSLOduration=2.698921516 podStartE2EDuration="2.698921516s" podCreationTimestamp="2026-01-30 22:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:36.681312981 +0000 UTC m=+1292.642560014" watchObservedRunningTime="2026-01-30 22:01:36.698921516 +0000 UTC m=+1292.660168549" Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.703402 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-18a2-account-create-update-xznvc" podStartSLOduration=2.703380525 podStartE2EDuration="2.703380525s" podCreationTimestamp="2026-01-30 22:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:36.701164775 +0000 UTC m=+1292.662411808" watchObservedRunningTime="2026-01-30 22:01:36.703380525 +0000 UTC m=+1292.664627558" Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.732554 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d511-account-create-update-jtbft" podStartSLOduration=2.73252403 podStartE2EDuration="2.73252403s" podCreationTimestamp="2026-01-30 22:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:36.719487248 +0000 UTC m=+1292.680734291" watchObservedRunningTime="2026-01-30 22:01:36.73252403 +0000 UTC m=+1292.693771063" Jan 30 22:01:36 crc kubenswrapper[4979]: I0130 22:01:36.808383 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5880-account-create-update-nvk6p" podStartSLOduration=2.808357629 podStartE2EDuration="2.808357629s" podCreationTimestamp="2026-01-30 22:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:36.753324268 +0000 UTC m=+1292.714571301" watchObservedRunningTime="2026-01-30 22:01:36.808357629 +0000 UTC m=+1292.769604662" Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.689929 4979 generic.go:334] "Generic (PLEG): container finished" podID="79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" containerID="60202a94174e28cbc487661cc024c8a1cf6c22c3cad5bc10eaa16a6b4124fa58" exitCode=0 Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.690066 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18a2-account-create-update-xznvc" event={"ID":"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96","Type":"ContainerDied","Data":"60202a94174e28cbc487661cc024c8a1cf6c22c3cad5bc10eaa16a6b4124fa58"} Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.693764 4979 generic.go:334] "Generic (PLEG): container finished" podID="bc3a0116-2f4a-4dde-bf99-56759f4349bc" 
containerID="046e829584329e51995faf5e5f7dfeed89e26cdea94351a2f27847446a921702" exitCode=0 Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.693813 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d511-account-create-update-jtbft" event={"ID":"bc3a0116-2f4a-4dde-bf99-56759f4349bc","Type":"ContainerDied","Data":"046e829584329e51995faf5e5f7dfeed89e26cdea94351a2f27847446a921702"} Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.696682 4979 generic.go:334] "Generic (PLEG): container finished" podID="6dd39b08-adf2-44da-b301-8e8694590426" containerID="e569170f774015f0e1ddac11812bbd2f299bdb3f6dc5151d5fb36790b57f47e8" exitCode=0 Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.696724 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-svtcv" event={"ID":"6dd39b08-adf2-44da-b301-8e8694590426","Type":"ContainerDied","Data":"e569170f774015f0e1ddac11812bbd2f299bdb3f6dc5151d5fb36790b57f47e8"} Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.700156 4979 generic.go:334] "Generic (PLEG): container finished" podID="f8b67e98-62a7-4a61-835e-8b7ec20167f3" containerID="bf7d515c41a90616fc9c098ab7b86a49d6e45238cee5250dcba6e62cadfccb13" exitCode=0 Jan 30 22:01:37 crc kubenswrapper[4979]: I0130 22:01:37.700270 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5880-account-create-update-nvk6p" event={"ID":"f8b67e98-62a7-4a61-835e-8b7ec20167f3","Type":"ContainerDied","Data":"bf7d515c41a90616fc9c098ab7b86a49d6e45238cee5250dcba6e62cadfccb13"} Jan 30 22:01:48 crc kubenswrapper[4979]: I0130 22:01:48.273768 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 22:01:49 crc kubenswrapper[4979]: E0130 22:01:49.668412 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 30 22:01:49 crc kubenswrapper[4979]: E0130 22:01:49.669050 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jxks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-9zrqq_openstack(023efd8e-7f0d-4ac5-80b3-db30dbb25905): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:01:49 crc kubenswrapper[4979]: E0130 22:01:49.670757 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-9zrqq" podUID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.734716 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.738835 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.747535 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.795495 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.797206 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.808183 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc3a0116-2f4a-4dde-bf99-56759f4349bc-operator-scripts\") pod \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.808399 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v5bm\" (UniqueName: \"kubernetes.io/projected/f8b67e98-62a7-4a61-835e-8b7ec20167f3-kube-api-access-4v5bm\") pod \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.809512 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc3a0116-2f4a-4dde-bf99-56759f4349bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc3a0116-2f4a-4dde-bf99-56759f4349bc" (UID: "bc3a0116-2f4a-4dde-bf99-56759f4349bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.809968 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgmk5\" (UniqueName: \"kubernetes.io/projected/175f02fa-3089-4350-a658-c939f6e6ef9f-kube-api-access-dgmk5\") pod \"175f02fa-3089-4350-a658-c939f6e6ef9f\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.810008 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/175f02fa-3089-4350-a658-c939f6e6ef9f-operator-scripts\") pod \"175f02fa-3089-4350-a658-c939f6e6ef9f\" (UID: \"175f02fa-3089-4350-a658-c939f6e6ef9f\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.810043 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8b67e98-62a7-4a61-835e-8b7ec20167f3-operator-scripts\") pod \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\" (UID: \"f8b67e98-62a7-4a61-835e-8b7ec20167f3\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.810075 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrtr\" (UniqueName: \"kubernetes.io/projected/bc3a0116-2f4a-4dde-bf99-56759f4349bc-kube-api-access-jbrtr\") pod \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\" (UID: \"bc3a0116-2f4a-4dde-bf99-56759f4349bc\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.810620 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc3a0116-2f4a-4dde-bf99-56759f4349bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.811479 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175f02fa-3089-4350-a658-c939f6e6ef9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "175f02fa-3089-4350-a658-c939f6e6ef9f" (UID: "175f02fa-3089-4350-a658-c939f6e6ef9f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.811592 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8b67e98-62a7-4a61-835e-8b7ec20167f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8b67e98-62a7-4a61-835e-8b7ec20167f3" (UID: "f8b67e98-62a7-4a61-835e-8b7ec20167f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.817496 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.843198 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3a0116-2f4a-4dde-bf99-56759f4349bc-kube-api-access-jbrtr" (OuterVolumeSpecName: "kube-api-access-jbrtr") pod "bc3a0116-2f4a-4dde-bf99-56759f4349bc" (UID: "bc3a0116-2f4a-4dde-bf99-56759f4349bc"). InnerVolumeSpecName "kube-api-access-jbrtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.844291 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b67e98-62a7-4a61-835e-8b7ec20167f3-kube-api-access-4v5bm" (OuterVolumeSpecName: "kube-api-access-4v5bm") pod "f8b67e98-62a7-4a61-835e-8b7ec20167f3" (UID: "f8b67e98-62a7-4a61-835e-8b7ec20167f3"). InnerVolumeSpecName "kube-api-access-4v5bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.847383 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175f02fa-3089-4350-a658-c939f6e6ef9f-kube-api-access-dgmk5" (OuterVolumeSpecName: "kube-api-access-dgmk5") pod "175f02fa-3089-4350-a658-c939f6e6ef9f" (UID: "175f02fa-3089-4350-a658-c939f6e6ef9f"). InnerVolumeSpecName "kube-api-access-dgmk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.856575 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-svtcv" event={"ID":"6dd39b08-adf2-44da-b301-8e8694590426","Type":"ContainerDied","Data":"cadb61eae8bb6de1416c95763f739a2cc5dec32932d6be854dd0b0c8fd871d27"} Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.856623 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cadb61eae8bb6de1416c95763f739a2cc5dec32932d6be854dd0b0c8fd871d27" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.856732 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-svtcv" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.863094 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5880-account-create-update-nvk6p" event={"ID":"f8b67e98-62a7-4a61-835e-8b7ec20167f3","Type":"ContainerDied","Data":"503e7d3ba39535a71f67070c35c2d482f374ad3f2b694d7668c84e006b975dc6"} Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.863394 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="503e7d3ba39535a71f67070c35c2d482f374ad3f2b694d7668c84e006b975dc6" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.863528 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5880-account-create-update-nvk6p" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.873547 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-95kjb" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.873698 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-95kjb" event={"ID":"175f02fa-3089-4350-a658-c939f6e6ef9f","Type":"ContainerDied","Data":"e28eadd933e61c8cf81e7798598f35cc0b2d5d5bba932062fca900134c507514"} Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.874289 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e28eadd933e61c8cf81e7798598f35cc0b2d5d5bba932062fca900134c507514" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.878313 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mvqgx" event={"ID":"a2df91e7-6710-4ee4-a671-4b19dc5c2798","Type":"ContainerDied","Data":"50dd0fc0b49e80e3c5debd93de5d7780d46ec7c1df30999b8a0ad23d4fee2659"} Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.878351 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50dd0fc0b49e80e3c5debd93de5d7780d46ec7c1df30999b8a0ad23d4fee2659" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.878444 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mvqgx" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.881671 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18a2-account-create-update-xznvc" event={"ID":"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96","Type":"ContainerDied","Data":"6217a069089e05bd31025a7ffa6ff66f1e5f4c74966d6a09befd4b32e119c8be"} Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.881746 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6217a069089e05bd31025a7ffa6ff66f1e5f4c74966d6a09befd4b32e119c8be" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.881701 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18a2-account-create-update-xznvc" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.885195 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d511-account-create-update-jtbft" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.886342 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d511-account-create-update-jtbft" event={"ID":"bc3a0116-2f4a-4dde-bf99-56759f4349bc","Type":"ContainerDied","Data":"ea00611de73705c35473a924d4f7c549482419a2f78ecfeaf84b3d1d727771aa"} Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.886449 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea00611de73705c35473a924d4f7c549482419a2f78ecfeaf84b3d1d727771aa" Jan 30 22:01:49 crc kubenswrapper[4979]: E0130 22:01:49.886459 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-9zrqq" podUID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913230 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2df91e7-6710-4ee4-a671-4b19dc5c2798-operator-scripts\") pod \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913312 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx2nb\" (UniqueName: \"kubernetes.io/projected/6dd39b08-adf2-44da-b301-8e8694590426-kube-api-access-nx2nb\") pod \"6dd39b08-adf2-44da-b301-8e8694590426\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913404 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlkh4\" (UniqueName: \"kubernetes.io/projected/a2df91e7-6710-4ee4-a671-4b19dc5c2798-kube-api-access-hlkh4\") pod \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\" (UID: \"a2df91e7-6710-4ee4-a671-4b19dc5c2798\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913482 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f6xq\" (UniqueName: \"kubernetes.io/projected/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-kube-api-access-7f6xq\") pod \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913524 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-operator-scripts\") pod \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\" (UID: \"79a4cbbe-93e4-414e-9ca3-2cd182d6ed96\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913587 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd39b08-adf2-44da-b301-8e8694590426-operator-scripts\") pod \"6dd39b08-adf2-44da-b301-8e8694590426\" (UID: \"6dd39b08-adf2-44da-b301-8e8694590426\") " Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.913746 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2df91e7-6710-4ee4-a671-4b19dc5c2798-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2df91e7-6710-4ee4-a671-4b19dc5c2798" (UID: "a2df91e7-6710-4ee4-a671-4b19dc5c2798"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914014 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v5bm\" (UniqueName: \"kubernetes.io/projected/f8b67e98-62a7-4a61-835e-8b7ec20167f3-kube-api-access-4v5bm\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914051 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgmk5\" (UniqueName: \"kubernetes.io/projected/175f02fa-3089-4350-a658-c939f6e6ef9f-kube-api-access-dgmk5\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914061 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2df91e7-6710-4ee4-a671-4b19dc5c2798-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914076 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/175f02fa-3089-4350-a658-c939f6e6ef9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914086 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8b67e98-62a7-4a61-835e-8b7ec20167f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914096 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrtr\" (UniqueName: \"kubernetes.io/projected/bc3a0116-2f4a-4dde-bf99-56759f4349bc-kube-api-access-jbrtr\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.914793 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd39b08-adf2-44da-b301-8e8694590426-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6dd39b08-adf2-44da-b301-8e8694590426" (UID: "6dd39b08-adf2-44da-b301-8e8694590426"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.915321 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" (UID: "79a4cbbe-93e4-414e-9ca3-2cd182d6ed96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.933341 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2df91e7-6710-4ee4-a671-4b19dc5c2798-kube-api-access-hlkh4" (OuterVolumeSpecName: "kube-api-access-hlkh4") pod "a2df91e7-6710-4ee4-a671-4b19dc5c2798" (UID: "a2df91e7-6710-4ee4-a671-4b19dc5c2798"). InnerVolumeSpecName "kube-api-access-hlkh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.942906 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd39b08-adf2-44da-b301-8e8694590426-kube-api-access-nx2nb" (OuterVolumeSpecName: "kube-api-access-nx2nb") pod "6dd39b08-adf2-44da-b301-8e8694590426" (UID: "6dd39b08-adf2-44da-b301-8e8694590426"). InnerVolumeSpecName "kube-api-access-nx2nb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:49 crc kubenswrapper[4979]: I0130 22:01:49.943459 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-kube-api-access-7f6xq" (OuterVolumeSpecName: "kube-api-access-7f6xq") pod "79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" (UID: "79a4cbbe-93e4-414e-9ca3-2cd182d6ed96"). InnerVolumeSpecName "kube-api-access-7f6xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:50 crc kubenswrapper[4979]: I0130 22:01:50.016172 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlkh4\" (UniqueName: \"kubernetes.io/projected/a2df91e7-6710-4ee4-a671-4b19dc5c2798-kube-api-access-hlkh4\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:50 crc kubenswrapper[4979]: I0130 22:01:50.016236 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f6xq\" (UniqueName: \"kubernetes.io/projected/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-kube-api-access-7f6xq\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:50 crc kubenswrapper[4979]: I0130 22:01:50.016263 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:50 crc kubenswrapper[4979]: I0130 22:01:50.016283 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd39b08-adf2-44da-b301-8e8694590426-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:50 crc kubenswrapper[4979]: I0130 22:01:50.016302 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx2nb\" (UniqueName: \"kubernetes.io/projected/6dd39b08-adf2-44da-b301-8e8694590426-kube-api-access-nx2nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:53 crc kubenswrapper[4979]: E0130 22:01:53.245087 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Jan 30 22:01:53 crc kubenswrapper[4979]: E0130 22:01:53.245990 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgdvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-tj4gc_openstack(fac7007d-8147-477c-a42e-2463290030ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:01:53 crc kubenswrapper[4979]: E0130 22:01:53.250172 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-tj4gc" podUID="fac7007d-8147-477c-a42e-2463290030ff" Jan 30 22:01:53 crc kubenswrapper[4979]: E0130 22:01:53.921522 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-tj4gc" podUID="fac7007d-8147-477c-a42e-2463290030ff" Jan 30 22:01:54 crc kubenswrapper[4979]: I0130 22:01:54.001880 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:01:54 crc kubenswrapper[4979]: I0130 22:01:54.010769 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"swift-storage-0\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " pod="openstack/swift-storage-0" Jan 30 22:01:54 crc kubenswrapper[4979]: I0130 22:01:54.201167 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 22:01:54 crc kubenswrapper[4979]: I0130 22:01:54.841365 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 22:01:54 crc kubenswrapper[4979]: I0130 22:01:54.949439 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"b5f19eb16c0b9ad8d89d2db8aaef61e8a41afec6d53e30023f1498d447572ee3"} Jan 30 22:01:56 crc kubenswrapper[4979]: I0130 22:01:56.969607 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2"} Jan 30 22:01:56 crc kubenswrapper[4979]: I0130 22:01:56.970414 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff"} Jan 30 22:01:56 crc kubenswrapper[4979]: I0130 22:01:56.970431 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"1fc0f7dc5cf54f3cba376eba063ba52318571cfa76b80fb36465eab8c48ff316"} Jan 30 22:01:56 crc kubenswrapper[4979]: I0130 22:01:56.970444 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"9ebde5265edc1759790d3676946d4106e58a2899f6ca92dff07d39b2c655de8d"} Jan 30 22:02:02 crc kubenswrapper[4979]: I0130 22:02:02.039408 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:02:02 crc kubenswrapper[4979]: I0130 22:02:02.040110 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:02:04 crc kubenswrapper[4979]: I0130 22:02:04.055207 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601"} Jan 30 22:02:04 crc kubenswrapper[4979]: I0130 22:02:04.057179 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3"} Jan 30 22:02:04 crc kubenswrapper[4979]: I0130 22:02:04.057218 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"20e0cc7660bd336e138f9bda2b90b0037324c98e23852b050c094fc3ec2b9759"} Jan 30 22:02:05 crc kubenswrapper[4979]: I0130 22:02:05.082854 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551"} Jan 30 22:02:05 crc kubenswrapper[4979]: I0130 22:02:05.083293 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9zrqq" event={"ID":"023efd8e-7f0d-4ac5-80b3-db30dbb25905","Type":"ContainerStarted","Data":"2a983b0743f2b2bf9c796ed27b781636f6d8f9667cb41df9212903e83c5acc92"} Jan 30 22:02:05 crc kubenswrapper[4979]: I0130 22:02:05.111190 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9zrqq" podStartSLOduration=3.5459591169999998 podStartE2EDuration="38.111166151s" podCreationTimestamp="2026-01-30 22:01:27 +0000 UTC" firstStartedPulling="2026-01-30 22:01:28.982162165 +0000 UTC m=+1284.943409198" lastFinishedPulling="2026-01-30 22:02:03.547369199 +0000 UTC m=+1319.508616232" observedRunningTime="2026-01-30 22:02:05.102070699 +0000 UTC m=+1321.063317732" watchObservedRunningTime="2026-01-30 22:02:05.111166151 +0000 UTC m=+1321.072413184" Jan 30 22:02:07 crc kubenswrapper[4979]: I0130 22:02:07.112413 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56"} Jan 30 22:02:07 crc kubenswrapper[4979]: I0130 22:02:07.113276 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"34b69c813947c1a15abad9192e8f1cfc7295fd0dfaea4369b35dee2f2f213420"} Jan 30 22:02:08 crc kubenswrapper[4979]: I0130 22:02:08.125177 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tj4gc" event={"ID":"fac7007d-8147-477c-a42e-2463290030ff","Type":"ContainerStarted","Data":"8b19c508f19bd2ec6e83e05f1f297998c5d48770b15b97debc2ae68900fd6e73"} Jan 30 22:02:08 crc kubenswrapper[4979]: I0130 22:02:08.135528 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"91cb53bd2b951f74cd0d66aa9f24d08e3c7022176624a9c9ffd768ceb393e191"} Jan 30 22:02:08 crc kubenswrapper[4979]: I0130 22:02:08.135583 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"7c505ec2a0f97d2fc0eb2e5eb7103ee437e137790c70cbc45de54bec450be932"} Jan 30 22:02:08 crc kubenswrapper[4979]: I0130 22:02:08.135600 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915"} Jan 30 22:02:08 crc kubenswrapper[4979]: I0130 22:02:08.135631 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3"} Jan 30 22:02:08 crc kubenswrapper[4979]: I0130 22:02:08.153785 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-tj4gc" podStartSLOduration=2.59324292 podStartE2EDuration="34.153758646s" podCreationTimestamp="2026-01-30 22:01:34 +0000 UTC" firstStartedPulling="2026-01-30 22:01:35.985966176 
+0000 UTC m=+1291.947213209" lastFinishedPulling="2026-01-30 22:02:07.546481902 +0000 UTC m=+1323.507728935" observedRunningTime="2026-01-30 22:02:08.142987729 +0000 UTC m=+1324.104234772" watchObservedRunningTime="2026-01-30 22:02:08.153758646 +0000 UTC m=+1324.115005689" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.162556 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerStarted","Data":"453f3cdac4ea155af06a1a316c55ca43062a6082a47aacfa7561eb05a7b482b3"} Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.209413 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=69.546146983 podStartE2EDuration="1m21.209391829s" podCreationTimestamp="2026-01-30 22:00:48 +0000 UTC" firstStartedPulling="2026-01-30 22:01:54.852498804 +0000 UTC m=+1310.813745847" lastFinishedPulling="2026-01-30 22:02:06.51574366 +0000 UTC m=+1322.476990693" observedRunningTime="2026-01-30 22:02:09.201960742 +0000 UTC m=+1325.163207795" watchObservedRunningTime="2026-01-30 22:02:09.209391829 +0000 UTC m=+1325.170638862" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525423 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-ncb4v"] Jan 30 22:02:09 crc kubenswrapper[4979]: E0130 22:02:09.525849 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b67e98-62a7-4a61-835e-8b7ec20167f3" containerName="mariadb-account-create-update" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525873 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b67e98-62a7-4a61-835e-8b7ec20167f3" containerName="mariadb-account-create-update" Jan 30 22:02:09 crc kubenswrapper[4979]: E0130 22:02:09.525888 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175f02fa-3089-4350-a658-c939f6e6ef9f" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525894 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="175f02fa-3089-4350-a658-c939f6e6ef9f" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: E0130 22:02:09.525907 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2df91e7-6710-4ee4-a671-4b19dc5c2798" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525914 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2df91e7-6710-4ee4-a671-4b19dc5c2798" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: E0130 22:02:09.525926 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" containerName="mariadb-account-create-update" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525931 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" containerName="mariadb-account-create-update" Jan 30 22:02:09 crc kubenswrapper[4979]: E0130 22:02:09.525955 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3a0116-2f4a-4dde-bf99-56759f4349bc" containerName="mariadb-account-create-update" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525963 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3a0116-2f4a-4dde-bf99-56759f4349bc" containerName="mariadb-account-create-update" Jan 30 22:02:09 crc kubenswrapper[4979]: E0130 22:02:09.525984 4979 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6dd39b08-adf2-44da-b301-8e8694590426" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.525992 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd39b08-adf2-44da-b301-8e8694590426" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526163 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b67e98-62a7-4a61-835e-8b7ec20167f3" containerName="mariadb-account-create-update" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526177 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd39b08-adf2-44da-b301-8e8694590426" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526188 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" containerName="mariadb-account-create-update" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526203 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="175f02fa-3089-4350-a658-c939f6e6ef9f" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526212 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3a0116-2f4a-4dde-bf99-56759f4349bc" containerName="mariadb-account-create-update" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.526223 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2df91e7-6710-4ee4-a671-4b19dc5c2798" containerName="mariadb-database-create" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.527330 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.530076 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.544643 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-ncb4v"] Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.587641 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.587795 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.587876 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-config\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.587946 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xmst\" (UniqueName: 
\"kubernetes.io/projected/9f00645b-b1f2-447f-b5a0-b38147768d8f-kube-api-access-2xmst\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.590601 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.590873 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-svc\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.692663 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-svc\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.693477 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.693528 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.693568 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-config\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.693604 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xmst\" (UniqueName: \"kubernetes.io/projected/9f00645b-b1f2-447f-b5a0-b38147768d8f-kube-api-access-2xmst\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.693689 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.694123 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-svc\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.694780 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.694929 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.695011 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.695110 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-config\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.716980 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xmst\" (UniqueName: \"kubernetes.io/projected/9f00645b-b1f2-447f-b5a0-b38147768d8f-kube-api-access-2xmst\") pod \"dnsmasq-dns-764c5664d7-ncb4v\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:09 crc kubenswrapper[4979]: I0130 22:02:09.891466 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:10 crc kubenswrapper[4979]: I0130 22:02:10.363525 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-ncb4v"] Jan 30 22:02:10 crc kubenswrapper[4979]: W0130 22:02:10.368219 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f00645b_b1f2_447f_b5a0_b38147768d8f.slice/crio-b6e379d574c063cdcbb91a31ed765e720bbacf97ae1abe1ce4c0719b7343efce WatchSource:0}: Error finding container b6e379d574c063cdcbb91a31ed765e720bbacf97ae1abe1ce4c0719b7343efce: Status 404 returned error can't find the container with id b6e379d574c063cdcbb91a31ed765e720bbacf97ae1abe1ce4c0719b7343efce Jan 30 22:02:11 crc kubenswrapper[4979]: I0130 22:02:11.182318 4979 generic.go:334] "Generic (PLEG): container finished" podID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerID="50c11c6ba1f573a9bebf130bbdbf73d94684ce5c18c6c0476848fec8b87a100e" exitCode=0 Jan 30 22:02:11 crc kubenswrapper[4979]: I0130 22:02:11.182404 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" event={"ID":"9f00645b-b1f2-447f-b5a0-b38147768d8f","Type":"ContainerDied","Data":"50c11c6ba1f573a9bebf130bbdbf73d94684ce5c18c6c0476848fec8b87a100e"} Jan 30 22:02:11 crc kubenswrapper[4979]: I0130 22:02:11.182720 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" event={"ID":"9f00645b-b1f2-447f-b5a0-b38147768d8f","Type":"ContainerStarted","Data":"b6e379d574c063cdcbb91a31ed765e720bbacf97ae1abe1ce4c0719b7343efce"} Jan 30 22:02:12 crc kubenswrapper[4979]: I0130 22:02:12.194734 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" event={"ID":"9f00645b-b1f2-447f-b5a0-b38147768d8f","Type":"ContainerStarted","Data":"c3c7777c050b25f075cd791441d8708f24d73c7a0fb4c8e23a52ac4d1a4fe2d9"} Jan 30 22:02:12 crc kubenswrapper[4979]: I0130 22:02:12.195590 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:12 crc kubenswrapper[4979]: I0130 22:02:12.228957 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" podStartSLOduration=3.228924631 podStartE2EDuration="3.228924631s" podCreationTimestamp="2026-01-30 22:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:12.215610166 +0000 UTC m=+1328.176857219" watchObservedRunningTime="2026-01-30 22:02:12.228924631 +0000 UTC m=+1328.190171684" Jan 30 22:02:13 crc kubenswrapper[4979]: I0130 22:02:13.209691 4979 generic.go:334] "Generic (PLEG): container finished" podID="fac7007d-8147-477c-a42e-2463290030ff" containerID="8b19c508f19bd2ec6e83e05f1f297998c5d48770b15b97debc2ae68900fd6e73" exitCode=0 Jan 30 22:02:13 crc kubenswrapper[4979]: I0130 22:02:13.211123 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tj4gc" event={"ID":"fac7007d-8147-477c-a42e-2463290030ff","Type":"ContainerDied","Data":"8b19c508f19bd2ec6e83e05f1f297998c5d48770b15b97debc2ae68900fd6e73"} Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.543656 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.593505 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-config-data\") pod \"fac7007d-8147-477c-a42e-2463290030ff\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.593785 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgdvm\" (UniqueName: \"kubernetes.io/projected/fac7007d-8147-477c-a42e-2463290030ff-kube-api-access-xgdvm\") pod \"fac7007d-8147-477c-a42e-2463290030ff\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.594726 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-combined-ca-bundle\") pod \"fac7007d-8147-477c-a42e-2463290030ff\" (UID: \"fac7007d-8147-477c-a42e-2463290030ff\") " Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.600645 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac7007d-8147-477c-a42e-2463290030ff-kube-api-access-xgdvm" (OuterVolumeSpecName: "kube-api-access-xgdvm") pod "fac7007d-8147-477c-a42e-2463290030ff" (UID: "fac7007d-8147-477c-a42e-2463290030ff"). InnerVolumeSpecName "kube-api-access-xgdvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.625218 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fac7007d-8147-477c-a42e-2463290030ff" (UID: "fac7007d-8147-477c-a42e-2463290030ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.648893 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-config-data" (OuterVolumeSpecName: "config-data") pod "fac7007d-8147-477c-a42e-2463290030ff" (UID: "fac7007d-8147-477c-a42e-2463290030ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.696854 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgdvm\" (UniqueName: \"kubernetes.io/projected/fac7007d-8147-477c-a42e-2463290030ff-kube-api-access-xgdvm\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.697195 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:14 crc kubenswrapper[4979]: I0130 22:02:14.697208 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7007d-8147-477c-a42e-2463290030ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.232898 4979 generic.go:334] "Generic (PLEG): container finished" podID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" containerID="2a983b0743f2b2bf9c796ed27b781636f6d8f9667cb41df9212903e83c5acc92" exitCode=0 Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.233007 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9zrqq" event={"ID":"023efd8e-7f0d-4ac5-80b3-db30dbb25905","Type":"ContainerDied","Data":"2a983b0743f2b2bf9c796ed27b781636f6d8f9667cb41df9212903e83c5acc92"} Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.235514 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tj4gc" event={"ID":"fac7007d-8147-477c-a42e-2463290030ff","Type":"ContainerDied","Data":"b66b3c202ff49e3b9a37dcd38590680dac6fdd11f7dbfdf69a3e9361cda17e7e"} Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.235553 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b66b3c202ff49e3b9a37dcd38590680dac6fdd11f7dbfdf69a3e9361cda17e7e" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.235628 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tj4gc" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.540530 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-ncb4v"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.540801 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerName="dnsmasq-dns" containerID="cri-o://c3c7777c050b25f075cd791441d8708f24d73c7a0fb4c8e23a52ac4d1a4fe2d9" gracePeriod=10 Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.562845 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xq8ms"] Jan 30 22:02:15 crc kubenswrapper[4979]: E0130 22:02:15.563363 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac7007d-8147-477c-a42e-2463290030ff" containerName="keystone-db-sync" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.563382 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac7007d-8147-477c-a42e-2463290030ff" containerName="keystone-db-sync" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.563600 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac7007d-8147-477c-a42e-2463290030ff" containerName="keystone-db-sync" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.564353 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.574739 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xq8ms"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.575441 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.575868 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.577794 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.584724 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dx6hv" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.584908 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.609372 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-82ngt"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.613583 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-credential-keys\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.613677 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-combined-ca-bundle\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.613782 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-fernet-keys\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.613824 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4jcz\" (UniqueName: \"kubernetes.io/projected/a6e395ca-523e-41fa-99e7-54a7926bae7b-kube-api-access-h4jcz\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.613897 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-config-data\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.613942 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-scripts\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " 
pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.630784 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-82ngt"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.631246 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716282 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-svc\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716353 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxp7g\" (UniqueName: \"kubernetes.io/projected/134a82db-d55c-4764-86d1-62146b42583f-kube-api-access-lxp7g\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716416 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-config-data\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716433 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716462 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716493 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-scripts\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716540 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-credential-keys\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-config\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 
crc kubenswrapper[4979]: I0130 22:02:15.716583 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716609 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-combined-ca-bundle\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716645 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-fernet-keys\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.716672 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4jcz\" (UniqueName: \"kubernetes.io/projected/a6e395ca-523e-41fa-99e7-54a7926bae7b-kube-api-access-h4jcz\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.733711 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-combined-ca-bundle\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.745902 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-config-data\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.754400 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-fernet-keys\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.757576 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-scripts\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.764105 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4jcz\" (UniqueName: \"kubernetes.io/projected/a6e395ca-523e-41fa-99e7-54a7926bae7b-kube-api-access-h4jcz\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.764639 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-credential-keys\") pod \"keystone-bootstrap-xq8ms\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.766041 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-cf4cw"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.776704 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.783521 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5h7pb" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.783914 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.784157 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.793990 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-cf4cw"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.818649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-config\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.818717 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.818789 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80aa258c-fc1b-4379-8b50-ac89cb9b4568-etc-machine-id\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.818810 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-db-sync-config-data\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.818883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njts7\" (UniqueName: \"kubernetes.io/projected/80aa258c-fc1b-4379-8b50-ac89cb9b4568-kube-api-access-njts7\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.818964 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-svc\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc 
kubenswrapper[4979]: I0130 22:02:15.818999 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-scripts\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.819064 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxp7g\" (UniqueName: \"kubernetes.io/projected/134a82db-d55c-4764-86d1-62146b42583f-kube-api-access-lxp7g\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.819117 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-combined-ca-bundle\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.819148 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.819209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.819299 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-config-data\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.820603 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-config\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.821309 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.821340 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.821470 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.821677 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-svc\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.889742 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.894782 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxp7g\" (UniqueName: \"kubernetes.io/projected/134a82db-d55c-4764-86d1-62146b42583f-kube-api-access-lxp7g\") pod \"dnsmasq-dns-5959f8865f-82ngt\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.920809 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80aa258c-fc1b-4379-8b50-ac89cb9b4568-etc-machine-id\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.920865 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-db-sync-config-data\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.920910 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njts7\" (UniqueName: \"kubernetes.io/projected/80aa258c-fc1b-4379-8b50-ac89cb9b4568-kube-api-access-njts7\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.920943 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-scripts\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.920974 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-combined-ca-bundle\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.921059 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-config-data\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 
22:02:15.925129 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80aa258c-fc1b-4379-8b50-ac89cb9b4568-etc-machine-id\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.930780 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qjfmb"] Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.932627 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.951811 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-combined-ca-bundle\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.952715 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cgj89" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.953059 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.953303 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.974993 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-db-sync-config-data\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:15 crc kubenswrapper[4979]: I0130 22:02:15.994437 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-config-data\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.000661 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-scripts\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.017976 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njts7\" (UniqueName: \"kubernetes.io/projected/80aa258c-fc1b-4379-8b50-ac89cb9b4568-kube-api-access-njts7\") pod \"cinder-db-sync-cf4cw\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.018934 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.031267 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-config\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.031340 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-combined-ca-bundle\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.031410 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/8481722d-b63c-4f8e-82e2-0960d719b46b-kube-api-access-vpvb2\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.032282 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qjfmb"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.124685 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.137685 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-config\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.137800 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-combined-ca-bundle\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.137938 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/8481722d-b63c-4f8e-82e2-0960d719b46b-kube-api-access-vpvb2\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.158081 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-config\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.173696 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-combined-ca-bundle\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.255796 4979 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/8481722d-b63c-4f8e-82e2-0960d719b46b-kube-api-access-vpvb2\") pod \"neutron-db-sync-qjfmb\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.349799 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-s58pz"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.353331 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.363056 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.374715 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nknfn" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.392557 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.396372 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.401492 4979 generic.go:334] "Generic (PLEG): container finished" podID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerID="c3c7777c050b25f075cd791441d8708f24d73c7a0fb4c8e23a52ac4d1a4fe2d9" exitCode=0 Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.402835 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" event={"ID":"9f00645b-b1f2-447f-b5a0-b38147768d8f","Type":"ContainerDied","Data":"c3c7777c050b25f075cd791441d8708f24d73c7a0fb4c8e23a52ac4d1a4fe2d9"} Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.405296 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-82ngt"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.446218 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s58pz"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.459128 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qncf2\" (UniqueName: \"kubernetes.io/projected/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-kube-api-access-qncf2\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.459240 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-config-data\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.459292 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-logs\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.459698 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-combined-ca-bundle\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.459780 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-scripts\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.466594 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-8lfxh"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.477521 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.506451 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-cj64f"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.508658 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.521660 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cxc2m" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.522163 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.530893 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-8lfxh"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.538173 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cj64f"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.559602 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562058 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-db-sync-config-data\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562170 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrndr\" (UniqueName: \"kubernetes.io/projected/79723cfd-4e3c-446c-bdf1-5c2c997950a8-kube-api-access-zrndr\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562240 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562276 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-config\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562327 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-combined-ca-bundle\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562362 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-combined-ca-bundle\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562386 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562422 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-scripts\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562456 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl944\" (UniqueName: \"kubernetes.io/projected/fee781fe-922e-4053-a318-02f409afb0a4-kube-api-access-bl944\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562479 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562504 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562554 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qncf2\" (UniqueName: \"kubernetes.io/projected/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-kube-api-access-qncf2\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562582 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-config-data\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.562604 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-logs\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.563336 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-logs\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.566427 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.571707 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.572131 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.585727 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-combined-ca-bundle\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.592330 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-config-data\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.592439 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.592951 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-scripts\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.600969 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qncf2\" (UniqueName: \"kubernetes.io/projected/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-kube-api-access-qncf2\") pod \"placement-db-sync-s58pz\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665574 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665633 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-log-httpd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665670 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl944\" (UniqueName: \"kubernetes.io/projected/fee781fe-922e-4053-a318-02f409afb0a4-kube-api-access-bl944\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665691 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665709 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665735 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-scripts\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665752 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-config-data\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665816 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-db-sync-config-data\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665835 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrndr\" (UniqueName: \"kubernetes.io/projected/79723cfd-4e3c-446c-bdf1-5c2c997950a8-kube-api-access-zrndr\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665885 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665909 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665936 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-config\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665954 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-run-httpd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.665977 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwcxd\" (UniqueName: \"kubernetes.io/projected/6043875b-c6a4-4cbd-919e-79a61239eaa6-kube-api-access-xwcxd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.666001 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-combined-ca-bundle\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.670379 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.671906 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-config\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.672828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.673460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") 
" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.685633 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.693527 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-db-sync-config-data\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.693607 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-combined-ca-bundle\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.710310 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl944\" (UniqueName: \"kubernetes.io/projected/fee781fe-922e-4053-a318-02f409afb0a4-kube-api-access-bl944\") pod \"dnsmasq-dns-58dd9ff6bc-8lfxh\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.724247 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.727869 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrndr\" (UniqueName: \"kubernetes.io/projected/79723cfd-4e3c-446c-bdf1-5c2c997950a8-kube-api-access-zrndr\") pod \"barbican-db-sync-cj64f\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.768607 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-log-httpd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.768753 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-scripts\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.768780 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.768882 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-config-data\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 
22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.769071 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.769119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-run-httpd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.769157 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwcxd\" (UniqueName: \"kubernetes.io/projected/6043875b-c6a4-4cbd-919e-79a61239eaa6-kube-api-access-xwcxd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.775507 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.779333 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-log-httpd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.782461 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-run-httpd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.783625 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-scripts\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.791266 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.792404 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-config-data\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.796421 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwcxd\" (UniqueName: \"kubernetes.io/projected/6043875b-c6a4-4cbd-919e-79a61239eaa6-kube-api-access-xwcxd\") pod \"ceilometer-0\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.905884 
4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.918174 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:16 crc kubenswrapper[4979]: I0130 22:02:16.957514 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cj64f" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.105878 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.300490 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xmst\" (UniqueName: \"kubernetes.io/projected/9f00645b-b1f2-447f-b5a0-b38147768d8f-kube-api-access-2xmst\") pod \"9f00645b-b1f2-447f-b5a0-b38147768d8f\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.300595 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-sb\") pod \"9f00645b-b1f2-447f-b5a0-b38147768d8f\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.300720 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-svc\") pod \"9f00645b-b1f2-447f-b5a0-b38147768d8f\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.300768 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-config\") pod \"9f00645b-b1f2-447f-b5a0-b38147768d8f\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.300892 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-nb\") pod \"9f00645b-b1f2-447f-b5a0-b38147768d8f\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.300940 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-swift-storage-0\") pod \"9f00645b-b1f2-447f-b5a0-b38147768d8f\" (UID: \"9f00645b-b1f2-447f-b5a0-b38147768d8f\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.314746 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f00645b-b1f2-447f-b5a0-b38147768d8f-kube-api-access-2xmst" (OuterVolumeSpecName: "kube-api-access-2xmst") pod "9f00645b-b1f2-447f-b5a0-b38147768d8f" (UID: "9f00645b-b1f2-447f-b5a0-b38147768d8f"). InnerVolumeSpecName "kube-api-access-2xmst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.345122 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9zrqq" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.398130 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f00645b-b1f2-447f-b5a0-b38147768d8f" (UID: "9f00645b-b1f2-447f-b5a0-b38147768d8f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.399948 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-config" (OuterVolumeSpecName: "config") pod "9f00645b-b1f2-447f-b5a0-b38147768d8f" (UID: "9f00645b-b1f2-447f-b5a0-b38147768d8f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.406289 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xmst\" (UniqueName: \"kubernetes.io/projected/9f00645b-b1f2-447f-b5a0-b38147768d8f-kube-api-access-2xmst\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.406455 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.406512 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.412115 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9f00645b-b1f2-447f-b5a0-b38147768d8f" (UID: "9f00645b-b1f2-447f-b5a0-b38147768d8f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.415666 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9f00645b-b1f2-447f-b5a0-b38147768d8f" (UID: "9f00645b-b1f2-447f-b5a0-b38147768d8f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.426084 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" event={"ID":"9f00645b-b1f2-447f-b5a0-b38147768d8f","Type":"ContainerDied","Data":"b6e379d574c063cdcbb91a31ed765e720bbacf97ae1abe1ce4c0719b7343efce"} Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.426157 4979 scope.go:117] "RemoveContainer" containerID="c3c7777c050b25f075cd791441d8708f24d73c7a0fb4c8e23a52ac4d1a4fe2d9" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.426330 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-ncb4v" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.442680 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9f00645b-b1f2-447f-b5a0-b38147768d8f" (UID: "9f00645b-b1f2-447f-b5a0-b38147768d8f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.446092 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9zrqq" event={"ID":"023efd8e-7f0d-4ac5-80b3-db30dbb25905","Type":"ContainerDied","Data":"55b5006f7fda8bd62b72a6d40335c6fb3c575f2d4b32986af417e66bd7514d71"} Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.446130 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55b5006f7fda8bd62b72a6d40335c6fb3c575f2d4b32986af417e66bd7514d71" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.446191 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9zrqq" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.472548 4979 scope.go:117] "RemoveContainer" containerID="50c11c6ba1f573a9bebf130bbdbf73d94684ce5c18c6c0476848fec8b87a100e" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.506112 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-cf4cw"] Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.507998 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-db-sync-config-data\") pod \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.508055 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-combined-ca-bundle\") pod \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.508124 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-config-data\") pod \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.508181 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jxks\" (UniqueName: \"kubernetes.io/projected/023efd8e-7f0d-4ac5-80b3-db30dbb25905-kube-api-access-4jxks\") pod \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\" (UID: \"023efd8e-7f0d-4ac5-80b3-db30dbb25905\") " Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.508568 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.508583 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc 
kubenswrapper[4979]: I0130 22:02:17.508595 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f00645b-b1f2-447f-b5a0-b38147768d8f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.519964 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/023efd8e-7f0d-4ac5-80b3-db30dbb25905-kube-api-access-4jxks" (OuterVolumeSpecName: "kube-api-access-4jxks") pod "023efd8e-7f0d-4ac5-80b3-db30dbb25905" (UID: "023efd8e-7f0d-4ac5-80b3-db30dbb25905"). InnerVolumeSpecName "kube-api-access-4jxks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.526077 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "023efd8e-7f0d-4ac5-80b3-db30dbb25905" (UID: "023efd8e-7f0d-4ac5-80b3-db30dbb25905"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.544518 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qjfmb"] Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.554691 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xq8ms"] Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.562116 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "023efd8e-7f0d-4ac5-80b3-db30dbb25905" (UID: "023efd8e-7f0d-4ac5-80b3-db30dbb25905"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: W0130 22:02:17.570019 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80aa258c_fc1b_4379_8b50_ac89cb9b4568.slice/crio-aa21a6f3e8e7a60f26b7105869748a94bb2157b238e798b219f1aa067289e1a3 WatchSource:0}: Error finding container aa21a6f3e8e7a60f26b7105869748a94bb2157b238e798b219f1aa067289e1a3: Status 404 returned error can't find the container with id aa21a6f3e8e7a60f26b7105869748a94bb2157b238e798b219f1aa067289e1a3 Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.586231 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-82ngt"] Jan 30 22:02:17 crc kubenswrapper[4979]: W0130 22:02:17.616258 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8481722d_b63c_4f8e_82e2_0960d719b46b.slice/crio-a6dfcf2666450f941993bd82183ea573b68e922f74cae89ccb55c9417b058213 WatchSource:0}: Error finding container a6dfcf2666450f941993bd82183ea573b68e922f74cae89ccb55c9417b058213: Status 404 returned error can't find the container with id a6dfcf2666450f941993bd82183ea573b68e922f74cae89ccb55c9417b058213 Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.620210 4979 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.620252 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.620270 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jxks\" (UniqueName: \"kubernetes.io/projected/023efd8e-7f0d-4ac5-80b3-db30dbb25905-kube-api-access-4jxks\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.703602 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s58pz"] Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.708912 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-config-data" (OuterVolumeSpecName: "config-data") pod "023efd8e-7f0d-4ac5-80b3-db30dbb25905" (UID: "023efd8e-7f0d-4ac5-80b3-db30dbb25905"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.742269 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023efd8e-7f0d-4ac5-80b3-db30dbb25905-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.754510 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.768568 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-8lfxh"] Jan 30 22:02:17 crc kubenswrapper[4979]: W0130 22:02:17.773268 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6043875b_c6a4_4cbd_919e_79a61239eaa6.slice/crio-6a2d854ec1dbd82bcfa5a4f7a9ec2e600f535da300f6990faf526c3822b41bfd WatchSource:0}: Error finding container 6a2d854ec1dbd82bcfa5a4f7a9ec2e600f535da300f6990faf526c3822b41bfd: Status 404 returned error can't find the container with id 6a2d854ec1dbd82bcfa5a4f7a9ec2e600f535da300f6990faf526c3822b41bfd Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.849436 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cj64f"] Jan 30 22:02:17 crc kubenswrapper[4979]: W0130 22:02:17.871268 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79723cfd_4e3c_446c_bdf1_5c2c997950a8.slice/crio-ace17961276b1e777acc172fefbadc89d1c575349207d8532faf89afa712f43e WatchSource:0}: Error finding container ace17961276b1e777acc172fefbadc89d1c575349207d8532faf89afa712f43e: Status 404 returned error can't find the container with id ace17961276b1e777acc172fefbadc89d1c575349207d8532faf89afa712f43e Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.983906 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-ncb4v"] Jan 30 22:02:17 crc kubenswrapper[4979]: I0130 22:02:17.991609 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-ncb4v"] Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.260249 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.467363 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xq8ms" event={"ID":"a6e395ca-523e-41fa-99e7-54a7926bae7b","Type":"ContainerStarted","Data":"f22a7e6623c93c4cc030d6b80af43c0a3dcf98b20f173cb5007da0a5eae591f9"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.467451 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xq8ms" event={"ID":"a6e395ca-523e-41fa-99e7-54a7926bae7b","Type":"ContainerStarted","Data":"9c6eba33d3f0c4b1f4edf70e3d95c55f24ea5e1f25cb0716ba0a75705be5252d"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.477527 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s58pz" event={"ID":"9c59f1f7-caf7-4ab4-b405-dbf27330ff37","Type":"ContainerStarted","Data":"386d53c83a51fa8ebf1662105890a6cd9dd37690f36cb6bac7142c9df6dc4505"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.482745 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerStarted","Data":"6a2d854ec1dbd82bcfa5a4f7a9ec2e600f535da300f6990faf526c3822b41bfd"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.501105 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xq8ms" podStartSLOduration=3.50108342 podStartE2EDuration="3.50108342s" podCreationTimestamp="2026-01-30 22:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:18.497376692 +0000 UTC m=+1334.458623745" watchObservedRunningTime="2026-01-30 22:02:18.50108342 +0000 UTC m=+1334.462330453" Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.502429 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cj64f" event={"ID":"79723cfd-4e3c-446c-bdf1-5c2c997950a8","Type":"ContainerStarted","Data":"ace17961276b1e777acc172fefbadc89d1c575349207d8532faf89afa712f43e"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.510390 4979 generic.go:334] "Generic (PLEG): container finished" podID="fee781fe-922e-4053-a318-02f409afb0a4" containerID="66bd742e325dd7cacecdec1b82cf32a7698ec617add172b382f4d11ff21b5756" exitCode=0 Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.510634 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" event={"ID":"fee781fe-922e-4053-a318-02f409afb0a4","Type":"ContainerDied","Data":"66bd742e325dd7cacecdec1b82cf32a7698ec617add172b382f4d11ff21b5756"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.510675 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" event={"ID":"fee781fe-922e-4053-a318-02f409afb0a4","Type":"ContainerStarted","Data":"8bf4c071c0668d71e79b98c441bfd48214eb848e83591dafb62efd7aedf4343c"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.526777 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qjfmb" event={"ID":"8481722d-b63c-4f8e-82e2-0960d719b46b","Type":"ContainerStarted","Data":"d89396dba43eda148feb03a8bfaa17357461f4fc9b9261374a3239bcbd38441a"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.526905 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qjfmb" event={"ID":"8481722d-b63c-4f8e-82e2-0960d719b46b","Type":"ContainerStarted","Data":"a6dfcf2666450f941993bd82183ea573b68e922f74cae89ccb55c9417b058213"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.534373 4979 generic.go:334] "Generic (PLEG): container finished" podID="134a82db-d55c-4764-86d1-62146b42583f" containerID="7db1b8115ca37505061c796c7d7fb618ffe09453de2ac94daa33d9b28697993f" exitCode=0 Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.534646 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" event={"ID":"134a82db-d55c-4764-86d1-62146b42583f","Type":"ContainerDied","Data":"7db1b8115ca37505061c796c7d7fb618ffe09453de2ac94daa33d9b28697993f"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.534716 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" event={"ID":"134a82db-d55c-4764-86d1-62146b42583f","Type":"ContainerStarted","Data":"6faa7501d297ac13cb24bec157535442206f4913e8b55307e20761f154eb1a60"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.541105 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-cf4cw" event={"ID":"80aa258c-fc1b-4379-8b50-ac89cb9b4568","Type":"ContainerStarted","Data":"aa21a6f3e8e7a60f26b7105869748a94bb2157b238e798b219f1aa067289e1a3"} Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.627004 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qjfmb" podStartSLOduration=3.626980517 podStartE2EDuration="3.626980517s" podCreationTimestamp="2026-01-30 22:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:18.606477962 +0000 UTC m=+1334.567724995" watchObservedRunningTime="2026-01-30 22:02:18.626980517 +0000 UTC m=+1334.588227540" Jan 30 22:02:18 crc kubenswrapper[4979]: I0130 22:02:18.966010 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-8lfxh"] Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.106464 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" path="/var/lib/kubelet/pods/9f00645b-b1f2-447f-b5a0-b38147768d8f/volumes" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.107330 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-plpcc"] Jan 30 22:02:19 crc kubenswrapper[4979]: E0130 22:02:19.107783 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" containerName="glance-db-sync" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.107808 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" containerName="glance-db-sync" Jan 30 22:02:19 crc kubenswrapper[4979]: E0130 22:02:19.107848 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerName="dnsmasq-dns" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.107857 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerName="dnsmasq-dns" Jan 30 22:02:19 crc kubenswrapper[4979]: E0130 22:02:19.107891 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerName="init" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.107900 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerName="init" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.112297 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f00645b-b1f2-447f-b5a0-b38147768d8f" containerName="dnsmasq-dns" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.112391 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" containerName="glance-db-sync" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.114066 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.125835 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-plpcc"] Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.286342 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.286838 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l7jp\" (UniqueName: \"kubernetes.io/projected/734e25b4-90d2-466b-a71d-029b7fd4b491-kube-api-access-4l7jp\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.286946 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.286980 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.287005 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-config\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.287051 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.340886 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.389890 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.389986 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.390060 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-config\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.390105 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.390267 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.390346 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l7jp\" (UniqueName: \"kubernetes.io/projected/734e25b4-90d2-466b-a71d-029b7fd4b491-kube-api-access-4l7jp\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.392415 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.393366 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-config\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.394750 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc 
kubenswrapper[4979]: I0130 22:02:19.399379 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.400250 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.428518 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l7jp\" (UniqueName: \"kubernetes.io/projected/734e25b4-90d2-466b-a71d-029b7fd4b491-kube-api-access-4l7jp\") pod \"dnsmasq-dns-785d8bcb8c-plpcc\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.493852 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-swift-storage-0\") pod \"134a82db-d55c-4764-86d1-62146b42583f\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.493952 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxp7g\" (UniqueName: \"kubernetes.io/projected/134a82db-d55c-4764-86d1-62146b42583f-kube-api-access-lxp7g\") pod \"134a82db-d55c-4764-86d1-62146b42583f\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.494146 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-nb\") pod \"134a82db-d55c-4764-86d1-62146b42583f\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.494297 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-config\") pod \"134a82db-d55c-4764-86d1-62146b42583f\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.494555 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-sb\") pod \"134a82db-d55c-4764-86d1-62146b42583f\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.494634 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-svc\") pod \"134a82db-d55c-4764-86d1-62146b42583f\" (UID: \"134a82db-d55c-4764-86d1-62146b42583f\") " Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.519674 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134a82db-d55c-4764-86d1-62146b42583f-kube-api-access-lxp7g" (OuterVolumeSpecName: "kube-api-access-lxp7g") pod 
"134a82db-d55c-4764-86d1-62146b42583f" (UID: "134a82db-d55c-4764-86d1-62146b42583f"). InnerVolumeSpecName "kube-api-access-lxp7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.538412 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "134a82db-d55c-4764-86d1-62146b42583f" (UID: "134a82db-d55c-4764-86d1-62146b42583f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.539069 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "134a82db-d55c-4764-86d1-62146b42583f" (UID: "134a82db-d55c-4764-86d1-62146b42583f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.542853 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-config" (OuterVolumeSpecName: "config") pod "134a82db-d55c-4764-86d1-62146b42583f" (UID: "134a82db-d55c-4764-86d1-62146b42583f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.542912 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "134a82db-d55c-4764-86d1-62146b42583f" (UID: "134a82db-d55c-4764-86d1-62146b42583f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.567549 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.577019 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "134a82db-d55c-4764-86d1-62146b42583f" (UID: "134a82db-d55c-4764-86d1-62146b42583f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.598696 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.598785 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.598845 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.598862 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxp7g\" (UniqueName: \"kubernetes.io/projected/134a82db-d55c-4764-86d1-62146b42583f-kube-api-access-lxp7g\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.598876 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.598901 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134a82db-d55c-4764-86d1-62146b42583f-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.607447 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" event={"ID":"fee781fe-922e-4053-a318-02f409afb0a4","Type":"ContainerStarted","Data":"a9cb30b4fcff28b3c31cd85ba96bf4b226be94a4287511e3bf51d389091357fe"} Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.607624 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" podUID="fee781fe-922e-4053-a318-02f409afb0a4" containerName="dnsmasq-dns" containerID="cri-o://a9cb30b4fcff28b3c31cd85ba96bf4b226be94a4287511e3bf51d389091357fe" gracePeriod=10 Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.607903 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.618781 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" event={"ID":"134a82db-d55c-4764-86d1-62146b42583f","Type":"ContainerDied","Data":"6faa7501d297ac13cb24bec157535442206f4913e8b55307e20761f154eb1a60"} Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.618858 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-82ngt" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.618892 4979 scope.go:117] "RemoveContainer" containerID="7db1b8115ca37505061c796c7d7fb618ffe09453de2ac94daa33d9b28697993f" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.657482 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" podStartSLOduration=3.657448741 podStartE2EDuration="3.657448741s" podCreationTimestamp="2026-01-30 22:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:19.643897811 +0000 UTC m=+1335.605144854" watchObservedRunningTime="2026-01-30 22:02:19.657448741 +0000 UTC m=+1335.618695774" Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.927247 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-82ngt"] Jan 30 22:02:19 crc kubenswrapper[4979]: I0130 22:02:19.953222 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-82ngt"] Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.083441 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:20 crc kubenswrapper[4979]: E0130 22:02:20.084121 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134a82db-d55c-4764-86d1-62146b42583f" containerName="init" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.084147 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="134a82db-d55c-4764-86d1-62146b42583f" containerName="init" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.084378 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="134a82db-d55c-4764-86d1-62146b42583f" containerName="init" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.085516 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.090392 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.093730 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.100147 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tzvjb" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.107382 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220231 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-scripts\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220347 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220425 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220452 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220515 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w54d\" (UniqueName: \"kubernetes.io/projected/3d906f2e-2930-4b79-adf3-1367943b9a75-kube-api-access-2w54d\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220783 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-config-data\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.220909 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-logs\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " 
pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.271345 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-plpcc"] Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.309850 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.311522 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.315293 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.322830 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-scripts\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.322881 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.322909 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.322929 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.322956 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w54d\" (UniqueName: \"kubernetes.io/projected/3d906f2e-2930-4b79-adf3-1367943b9a75-kube-api-access-2w54d\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.323066 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-config-data\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.323089 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-logs\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.323581 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-logs\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.323782 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.329126 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.334056 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.336614 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.343251 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-config-data\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.357473 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-scripts\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.362771 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w54d\" (UniqueName: \"kubernetes.io/projected/3d906f2e-2930-4b79-adf3-1367943b9a75-kube-api-access-2w54d\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.396992 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.424452 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425363 4979 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd9hw\" (UniqueName: \"kubernetes.io/projected/57847e36-4024-4fcd-a141-ac9bac71a969-kube-api-access-kd9hw\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425409 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-logs\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425452 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425468 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425499 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425562 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.425240 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527366 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd9hw\" (UniqueName: \"kubernetes.io/projected/57847e36-4024-4fcd-a141-ac9bac71a969-kube-api-access-kd9hw\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527455 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-logs\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527500 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527517 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527550 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527644 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.527669 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.528530 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.529424 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-logs\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.529777 4979 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.538616 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.544606 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.550828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd9hw\" (UniqueName: \"kubernetes.io/projected/57847e36-4024-4fcd-a141-ac9bac71a969-kube-api-access-kd9hw\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.561073 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.561995 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.646486 4979 generic.go:334] "Generic (PLEG): container finished" podID="fee781fe-922e-4053-a318-02f409afb0a4" containerID="a9cb30b4fcff28b3c31cd85ba96bf4b226be94a4287511e3bf51d389091357fe" exitCode=0 Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.646606 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" event={"ID":"fee781fe-922e-4053-a318-02f409afb0a4","Type":"ContainerDied","Data":"a9cb30b4fcff28b3c31cd85ba96bf4b226be94a4287511e3bf51d389091357fe"} Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.650243 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" event={"ID":"734e25b4-90d2-466b-a71d-029b7fd4b491","Type":"ContainerStarted","Data":"0bbffd435fbf3836f4de2a4551e90534d72d8f16d6de3150a0817077872230f4"} Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.758186 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:20 crc kubenswrapper[4979]: I0130 22:02:20.941777 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.045797 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl944\" (UniqueName: \"kubernetes.io/projected/fee781fe-922e-4053-a318-02f409afb0a4-kube-api-access-bl944\") pod \"fee781fe-922e-4053-a318-02f409afb0a4\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.045947 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-swift-storage-0\") pod \"fee781fe-922e-4053-a318-02f409afb0a4\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.045976 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-svc\") pod \"fee781fe-922e-4053-a318-02f409afb0a4\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.046071 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-sb\") pod \"fee781fe-922e-4053-a318-02f409afb0a4\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.046100 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-config\") pod \"fee781fe-922e-4053-a318-02f409afb0a4\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.046184 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-nb\") pod \"fee781fe-922e-4053-a318-02f409afb0a4\" (UID: \"fee781fe-922e-4053-a318-02f409afb0a4\") " Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.060358 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee781fe-922e-4053-a318-02f409afb0a4-kube-api-access-bl944" (OuterVolumeSpecName: "kube-api-access-bl944") pod "fee781fe-922e-4053-a318-02f409afb0a4" (UID: "fee781fe-922e-4053-a318-02f409afb0a4"). InnerVolumeSpecName "kube-api-access-bl944". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.095899 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134a82db-d55c-4764-86d1-62146b42583f" path="/var/lib/kubelet/pods/134a82db-d55c-4764-86d1-62146b42583f/volumes" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.142833 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fee781fe-922e-4053-a318-02f409afb0a4" (UID: "fee781fe-922e-4053-a318-02f409afb0a4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.148726 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.148763 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl944\" (UniqueName: \"kubernetes.io/projected/fee781fe-922e-4053-a318-02f409afb0a4-kube-api-access-bl944\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.165982 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fee781fe-922e-4053-a318-02f409afb0a4" (UID: "fee781fe-922e-4053-a318-02f409afb0a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.176097 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fee781fe-922e-4053-a318-02f409afb0a4" (UID: "fee781fe-922e-4053-a318-02f409afb0a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.188617 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fee781fe-922e-4053-a318-02f409afb0a4" (UID: "fee781fe-922e-4053-a318-02f409afb0a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.208468 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-config" (OuterVolumeSpecName: "config") pod "fee781fe-922e-4053-a318-02f409afb0a4" (UID: "fee781fe-922e-4053-a318-02f409afb0a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.255160 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.255224 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.255245 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.255255 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee781fe-922e-4053-a318-02f409afb0a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.264696 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.602266 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:21 crc kubenswrapper[4979]: W0130 22:02:21.609795 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57847e36_4024_4fcd_a141_ac9bac71a969.slice/crio-5c90e488031249ab7efe84e3e6cf990428a66c216f8b851ce1cf579acae29d8a WatchSource:0}: Error finding container 5c90e488031249ab7efe84e3e6cf990428a66c216f8b851ce1cf579acae29d8a: Status 404 returned error can't find the container with id 5c90e488031249ab7efe84e3e6cf990428a66c216f8b851ce1cf579acae29d8a Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.672330 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57847e36-4024-4fcd-a141-ac9bac71a969","Type":"ContainerStarted","Data":"5c90e488031249ab7efe84e3e6cf990428a66c216f8b851ce1cf579acae29d8a"} Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.676996 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" event={"ID":"fee781fe-922e-4053-a318-02f409afb0a4","Type":"ContainerDied","Data":"8bf4c071c0668d71e79b98c441bfd48214eb848e83591dafb62efd7aedf4343c"} Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.677086 4979 scope.go:117] "RemoveContainer" containerID="a9cb30b4fcff28b3c31cd85ba96bf4b226be94a4287511e3bf51d389091357fe" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.677245 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-8lfxh" Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.699606 4979 generic.go:334] "Generic (PLEG): container finished" podID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerID="a84e16cda693df587eff75844a45206ef87069920f6876c4a2c9eb4f7fae9fbe" exitCode=0 Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.700177 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" event={"ID":"734e25b4-90d2-466b-a71d-029b7fd4b491","Type":"ContainerDied","Data":"a84e16cda693df587eff75844a45206ef87069920f6876c4a2c9eb4f7fae9fbe"} Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.718671 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d906f2e-2930-4b79-adf3-1367943b9a75","Type":"ContainerStarted","Data":"0235a45d441f3c8cb176ee3ba5d7b3a592e383ad0a2a9bc2db0b906132b62220"} Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.772219 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-8lfxh"] Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.794115 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-8lfxh"] Jan 30 22:02:21 crc kubenswrapper[4979]: I0130 22:02:21.807975 4979 scope.go:117] "RemoveContainer" containerID="66bd742e325dd7cacecdec1b82cf32a7698ec617add172b382f4d11ff21b5756" Jan 30 22:02:22 crc kubenswrapper[4979]: I0130 22:02:22.760506 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" event={"ID":"734e25b4-90d2-466b-a71d-029b7fd4b491","Type":"ContainerStarted","Data":"1d1c26d6f08b899fc938d9e9e56bd49d29a4055ed2b289e8b5b646f2046dec68"} Jan 30 22:02:22 crc kubenswrapper[4979]: I0130 22:02:22.760996 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:22 crc kubenswrapper[4979]: I0130 22:02:22.779364 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d906f2e-2930-4b79-adf3-1367943b9a75","Type":"ContainerStarted","Data":"5b9ff4962b9217f70b58001e9bfc2d7fc1de3d24c309f129fa4adf95454b0c31"} Jan 30 22:02:22 crc kubenswrapper[4979]: I0130 22:02:22.797649 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" podStartSLOduration=3.79762184 podStartE2EDuration="3.79762184s" podCreationTimestamp="2026-01-30 22:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:22.788984011 +0000 UTC m=+1338.750231044" watchObservedRunningTime="2026-01-30 22:02:22.79762184 +0000 UTC m=+1338.758868873" Jan 30 22:02:23 crc kubenswrapper[4979]: I0130 22:02:23.088494 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee781fe-922e-4053-a318-02f409afb0a4" path="/var/lib/kubelet/pods/fee781fe-922e-4053-a318-02f409afb0a4/volumes" Jan 30 22:02:23 crc kubenswrapper[4979]: I0130 22:02:23.820144 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d906f2e-2930-4b79-adf3-1367943b9a75","Type":"ContainerStarted","Data":"d520027c614971a5476bc85d82647b2a7ab50c259d5e0da522c82c672fbac675"} Jan 30 22:02:23 crc kubenswrapper[4979]: I0130 22:02:23.825389 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"57847e36-4024-4fcd-a141-ac9bac71a969","Type":"ContainerStarted","Data":"11f2966357f7757e1c5ff42bbe596d8aabdfc99c75e4e411aa7571267254b305"} Jan 30 22:02:23 crc kubenswrapper[4979]: I0130 22:02:23.858821 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.858800051 podStartE2EDuration="4.858800051s" podCreationTimestamp="2026-01-30 22:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:23.842542508 +0000 UTC m=+1339.803789551" watchObservedRunningTime="2026-01-30 22:02:23.858800051 +0000 UTC m=+1339.820047074" Jan 30 22:02:24 crc kubenswrapper[4979]: I0130 22:02:24.838680 4979 generic.go:334] "Generic (PLEG): container finished" podID="a6e395ca-523e-41fa-99e7-54a7926bae7b" containerID="f22a7e6623c93c4cc030d6b80af43c0a3dcf98b20f173cb5007da0a5eae591f9" exitCode=0 Jan 30 22:02:24 crc kubenswrapper[4979]: I0130 22:02:24.838789 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xq8ms" event={"ID":"a6e395ca-523e-41fa-99e7-54a7926bae7b","Type":"ContainerDied","Data":"f22a7e6623c93c4cc030d6b80af43c0a3dcf98b20f173cb5007da0a5eae591f9"} Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.136537 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.144465 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-log" containerID="cri-o://5b9ff4962b9217f70b58001e9bfc2d7fc1de3d24c309f129fa4adf95454b0c31" gracePeriod=30 Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.144647 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-httpd" containerID="cri-o://d520027c614971a5476bc85d82647b2a7ab50c259d5e0da522c82c672fbac675" gracePeriod=30 Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.265556 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.881038 4979 generic.go:334] "Generic (PLEG): container finished" podID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerID="d520027c614971a5476bc85d82647b2a7ab50c259d5e0da522c82c672fbac675" exitCode=0 Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.881423 4979 generic.go:334] "Generic (PLEG): container finished" podID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerID="5b9ff4962b9217f70b58001e9bfc2d7fc1de3d24c309f129fa4adf95454b0c31" exitCode=143 Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.881084 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d906f2e-2930-4b79-adf3-1367943b9a75","Type":"ContainerDied","Data":"d520027c614971a5476bc85d82647b2a7ab50c259d5e0da522c82c672fbac675"} Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.881532 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d906f2e-2930-4b79-adf3-1367943b9a75","Type":"ContainerDied","Data":"5b9ff4962b9217f70b58001e9bfc2d7fc1de3d24c309f129fa4adf95454b0c31"} Jan 30 22:02:26 crc 
Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.884836 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57847e36-4024-4fcd-a141-ac9bac71a969","Type":"ContainerStarted","Data":"68f01a62fc8c4a233f111cbe66a15bcea6da8611b4b7671cb26edad699fef747"} Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.885075 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-httpd" containerID="cri-o://68f01a62fc8c4a233f111cbe66a15bcea6da8611b4b7671cb26edad699fef747" gracePeriod=30 Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.885052 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-log" containerID="cri-o://11f2966357f7757e1c5ff42bbe596d8aabdfc99c75e4e411aa7571267254b305" gracePeriod=30 Jan 30 22:02:26 crc kubenswrapper[4979]: I0130 22:02:26.923589 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.923565115 podStartE2EDuration="7.923565115s" podCreationTimestamp="2026-01-30 22:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:26.91360874 +0000 UTC m=+1342.874855783" watchObservedRunningTime="2026-01-30 22:02:26.923565115 +0000 UTC m=+1342.884812148" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.052681 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.120736 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4jcz\" (UniqueName: \"kubernetes.io/projected/a6e395ca-523e-41fa-99e7-54a7926bae7b-kube-api-access-h4jcz\") pod \"a6e395ca-523e-41fa-99e7-54a7926bae7b\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.121583 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-combined-ca-bundle\") pod \"a6e395ca-523e-41fa-99e7-54a7926bae7b\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.121695 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-credential-keys\") pod \"a6e395ca-523e-41fa-99e7-54a7926bae7b\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.121839 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-fernet-keys\") pod \"a6e395ca-523e-41fa-99e7-54a7926bae7b\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.122010 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-scripts\") pod \"a6e395ca-523e-41fa-99e7-54a7926bae7b\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " Jan 30 22:02:27 crc 
kubenswrapper[4979]: I0130 22:02:27.122141 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-config-data\") pod \"a6e395ca-523e-41fa-99e7-54a7926bae7b\" (UID: \"a6e395ca-523e-41fa-99e7-54a7926bae7b\") " Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.140453 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a6e395ca-523e-41fa-99e7-54a7926bae7b" (UID: "a6e395ca-523e-41fa-99e7-54a7926bae7b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:27 crc kubenswrapper[4979]: E0130 22:02:27.150669 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57847e36_4024_4fcd_a141_ac9bac71a969.slice/crio-68f01a62fc8c4a233f111cbe66a15bcea6da8611b4b7671cb26edad699fef747.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.156248 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-scripts" (OuterVolumeSpecName: "scripts") pod "a6e395ca-523e-41fa-99e7-54a7926bae7b" (UID: "a6e395ca-523e-41fa-99e7-54a7926bae7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.156426 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6e395ca-523e-41fa-99e7-54a7926bae7b-kube-api-access-h4jcz" (OuterVolumeSpecName: "kube-api-access-h4jcz") pod "a6e395ca-523e-41fa-99e7-54a7926bae7b" (UID: "a6e395ca-523e-41fa-99e7-54a7926bae7b"). InnerVolumeSpecName "kube-api-access-h4jcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.158264 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a6e395ca-523e-41fa-99e7-54a7926bae7b" (UID: "a6e395ca-523e-41fa-99e7-54a7926bae7b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.171051 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-config-data" (OuterVolumeSpecName: "config-data") pod "a6e395ca-523e-41fa-99e7-54a7926bae7b" (UID: "a6e395ca-523e-41fa-99e7-54a7926bae7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.204007 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6e395ca-523e-41fa-99e7-54a7926bae7b" (UID: "a6e395ca-523e-41fa-99e7-54a7926bae7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.226390 4979 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.226451 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.226463 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.226505 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4jcz\" (UniqueName: \"kubernetes.io/projected/a6e395ca-523e-41fa-99e7-54a7926bae7b-kube-api-access-h4jcz\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.226516 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.226526 4979 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6e395ca-523e-41fa-99e7-54a7926bae7b-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.900667 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xq8ms" event={"ID":"a6e395ca-523e-41fa-99e7-54a7926bae7b","Type":"ContainerDied","Data":"9c6eba33d3f0c4b1f4edf70e3d95c55f24ea5e1f25cb0716ba0a75705be5252d"} Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.900724 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6eba33d3f0c4b1f4edf70e3d95c55f24ea5e1f25cb0716ba0a75705be5252d" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.900728 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xq8ms" Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.904078 4979 generic.go:334] "Generic (PLEG): container finished" podID="57847e36-4024-4fcd-a141-ac9bac71a969" containerID="68f01a62fc8c4a233f111cbe66a15bcea6da8611b4b7671cb26edad699fef747" exitCode=0 Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.904122 4979 generic.go:334] "Generic (PLEG): container finished" podID="57847e36-4024-4fcd-a141-ac9bac71a969" containerID="11f2966357f7757e1c5ff42bbe596d8aabdfc99c75e4e411aa7571267254b305" exitCode=143 Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.904162 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57847e36-4024-4fcd-a141-ac9bac71a969","Type":"ContainerDied","Data":"68f01a62fc8c4a233f111cbe66a15bcea6da8611b4b7671cb26edad699fef747"} Jan 30 22:02:27 crc kubenswrapper[4979]: I0130 22:02:27.904228 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57847e36-4024-4fcd-a141-ac9bac71a969","Type":"ContainerDied","Data":"11f2966357f7757e1c5ff42bbe596d8aabdfc99c75e4e411aa7571267254b305"} Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.262019 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xq8ms"] Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.275634 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xq8ms"] Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.343304 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dmn2z"] Jan 30 22:02:28 crc kubenswrapper[4979]: E0130 22:02:28.343912 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee781fe-922e-4053-a318-02f409afb0a4" containerName="dnsmasq-dns" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.343928 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee781fe-922e-4053-a318-02f409afb0a4" containerName="dnsmasq-dns" Jan 30 22:02:28 crc kubenswrapper[4979]: E0130 22:02:28.343952 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6e395ca-523e-41fa-99e7-54a7926bae7b" containerName="keystone-bootstrap" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.343960 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6e395ca-523e-41fa-99e7-54a7926bae7b" containerName="keystone-bootstrap" Jan 30 22:02:28 crc kubenswrapper[4979]: E0130 22:02:28.343980 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee781fe-922e-4053-a318-02f409afb0a4" containerName="init" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.343988 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee781fe-922e-4053-a318-02f409afb0a4" containerName="init" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.344242 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6e395ca-523e-41fa-99e7-54a7926bae7b" containerName="keystone-bootstrap" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.344267 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee781fe-922e-4053-a318-02f409afb0a4" containerName="dnsmasq-dns" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.345064 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.348016 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dx6hv" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.348264 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.348998 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.349196 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.349363 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.415178 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dmn2z"] Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.503919 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-combined-ca-bundle\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.504395 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-credential-keys\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.504452 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-config-data\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.504558 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xcf8\" (UniqueName: \"kubernetes.io/projected/9686aad4-f2a7-4878-ae8b-f6142e93703a-kube-api-access-6xcf8\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.504726 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-fernet-keys\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.504761 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-scripts\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.606317 4979 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-scripts\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.606400 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-combined-ca-bundle\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.606433 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-credential-keys\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.606471 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-config-data\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.606562 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xcf8\" (UniqueName: \"kubernetes.io/projected/9686aad4-f2a7-4878-ae8b-f6142e93703a-kube-api-access-6xcf8\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.606632 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-fernet-keys\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.612849 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-scripts\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.613321 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-fernet-keys\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.613515 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-config-data\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.615436 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-combined-ca-bundle\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") 
" pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.626066 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-credential-keys\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.629110 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xcf8\" (UniqueName: \"kubernetes.io/projected/9686aad4-f2a7-4878-ae8b-f6142e93703a-kube-api-access-6xcf8\") pod \"keystone-bootstrap-dmn2z\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:28 crc kubenswrapper[4979]: I0130 22:02:28.718085 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:29 crc kubenswrapper[4979]: I0130 22:02:29.096129 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6e395ca-523e-41fa-99e7-54a7926bae7b" path="/var/lib/kubelet/pods/a6e395ca-523e-41fa-99e7-54a7926bae7b/volumes" Jan 30 22:02:29 crc kubenswrapper[4979]: I0130 22:02:29.573610 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:02:29 crc kubenswrapper[4979]: I0130 22:02:29.654782 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t86qb"] Jan 30 22:02:29 crc kubenswrapper[4979]: I0130 22:02:29.655675 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-t86qb" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" containerID="cri-o://11b12b8a1042240e01cbd94aefdd223922da5bf565812f8e936ee2b92328c29b" gracePeriod=10 Jan 30 22:02:31 crc kubenswrapper[4979]: I0130 22:02:31.952346 4979 generic.go:334] "Generic (PLEG): container finished" podID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerID="11b12b8a1042240e01cbd94aefdd223922da5bf565812f8e936ee2b92328c29b" exitCode=0 Jan 30 22:02:31 crc kubenswrapper[4979]: I0130 22:02:31.952562 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t86qb" event={"ID":"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4","Type":"ContainerDied","Data":"11b12b8a1042240e01cbd94aefdd223922da5bf565812f8e936ee2b92328c29b"} Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.039873 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.039963 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.040122 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.041052 4979 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9dd828028bd8f4b59424b93888d32e1ab8101a0db37322829e13e6a47a54aa2c"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.041121 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://9dd828028bd8f4b59424b93888d32e1ab8101a0db37322829e13e6a47a54aa2c" gracePeriod=600 Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.966983 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="9dd828028bd8f4b59424b93888d32e1ab8101a0db37322829e13e6a47a54aa2c" exitCode=0 Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.967110 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"9dd828028bd8f4b59424b93888d32e1ab8101a0db37322829e13e6a47a54aa2c"} Jan 30 22:02:32 crc kubenswrapper[4979]: I0130 22:02:32.967156 4979 scope.go:117] "RemoveContainer" containerID="d09f2b9fb9e70c284933384af86903d057bc10cc69d7514572c72f1e0e4710ff" Jan 30 22:02:33 crc kubenswrapper[4979]: E0130 22:02:33.507132 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 30 22:02:33 crc kubenswrapper[4979]: E0130 22:02:33.507817 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qncf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-s58pz_openstack(9c59f1f7-caf7-4ab4-b405-dbf27330ff37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:02:33 crc kubenswrapper[4979]: E0130 22:02:33.509257 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-s58pz" podUID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37" Jan 30 22:02:33 crc kubenswrapper[4979]: E0130 22:02:33.977564 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-s58pz" podUID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37" Jan 30 22:02:34 crc kubenswrapper[4979]: I0130 22:02:34.212558 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-t86qb" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Jan 30 22:02:34 crc kubenswrapper[4979]: E0130 22:02:34.417585 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
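
The placement-db-sync and barbican-db-sync failures here show the standard retry shape: a pull fails with ErrImagePull, and subsequent sync attempts are rejected with ImagePullBackOff ("Back-off pulling image ...") while the kubelet waits out an exponentially growing delay. A generic sketch of a doubling-with-cap backoff schedule; the 10s initial delay and 5m cap are assumptions for illustration, not values read from this log:

package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns the first n retry delays for a doubling backoff
// capped at max, the general shape behind ImagePullBackOff.
func backoffSchedule(initial, max time.Duration, n int) []time.Duration {
	out := make([]time.Duration, 0, n)
	d := initial
	for i := 0; i < n; i++ {
		out = append(out, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return out
}

func main() {
	fmt.Println(backoffSchedule(10*time.Second, 5*time.Minute, 6))
	// [10s 20s 40s 1m20s 2m40s 5m0s]
}

The cap matters operationally: once the schedule saturates, a pod stuck in ImagePullBackOff keeps retrying at the capped interval rather than giving up, which is why these db-sync pods recover on their own once the registry is reachable again.
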
Jan 30 22:02:34 crc kubenswrapper[4979]: E0130 22:02:34.417792 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrndr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-cj64f_openstack(79723cfd-4e3c-446c-bdf1-5c2c997950a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:02:34 crc kubenswrapper[4979]: E0130 22:02:34.419813 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-cj64f" podUID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" Jan 30 22:02:34 crc kubenswrapper[4979]: E0130 22:02:34.990096 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-cj64f" podUID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" Jan 30 22:02:44 crc kubenswrapper[4979]: I0130 22:02:44.213278 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-t86qb" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.104410 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t86qb" event={"ID":"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4","Type":"ContainerDied","Data":"9e5bb560297f4e0e8f2115f8c48331514e53ce9d31d3b53377b9d219de77d2e7"} Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.104909 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e5bb560297f4e0e8f2115f8c48331514e53ce9d31d3b53377b9d219de77d2e7" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 
22:02:45.109170 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57847e36-4024-4fcd-a141-ac9bac71a969","Type":"ContainerDied","Data":"5c90e488031249ab7efe84e3e6cf990428a66c216f8b851ce1cf579acae29d8a"} Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.109202 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c90e488031249ab7efe84e3e6cf990428a66c216f8b851ce1cf579acae29d8a" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.120783 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.129621 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.200974 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-logs\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.201116 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-config-data\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.201187 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd9hw\" (UniqueName: \"kubernetes.io/projected/57847e36-4024-4fcd-a141-ac9bac71a969-kube-api-access-kd9hw\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.201798 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-logs" (OuterVolumeSpecName: "logs") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202182 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-scripts\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202225 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjdnk\" (UniqueName: \"kubernetes.io/projected/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-kube-api-access-gjdnk\") pod \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202297 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-combined-ca-bundle\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202354 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-dns-svc\") pod \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202412 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202530 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-nb\") pod \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202585 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-httpd-run\") pod \"57847e36-4024-4fcd-a141-ac9bac71a969\" (UID: \"57847e36-4024-4fcd-a141-ac9bac71a969\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202685 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-sb\") pod \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.202726 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-config\") pod \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\" (UID: \"ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4\") " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.203421 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.203446 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.217279 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.217411 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57847e36-4024-4fcd-a141-ac9bac71a969-kube-api-access-kd9hw" (OuterVolumeSpecName: "kube-api-access-kd9hw") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "kube-api-access-kd9hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.219255 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-kube-api-access-gjdnk" (OuterVolumeSpecName: "kube-api-access-gjdnk") pod "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" (UID: "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4"). InnerVolumeSpecName "kube-api-access-gjdnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.231640 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-scripts" (OuterVolumeSpecName: "scripts") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.269801 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-config" (OuterVolumeSpecName: "config") pod "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" (UID: "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.271668 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" (UID: "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.292397 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.295134 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" (UID: "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.297474 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-config-data" (OuterVolumeSpecName: "config-data") pod "57847e36-4024-4fcd-a141-ac9bac71a969" (UID: "57847e36-4024-4fcd-a141-ac9bac71a969"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306294 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306334 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306345 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306355 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd9hw\" (UniqueName: \"kubernetes.io/projected/57847e36-4024-4fcd-a141-ac9bac71a969-kube-api-access-kd9hw\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306369 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306382 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjdnk\" (UniqueName: \"kubernetes.io/projected/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-kube-api-access-gjdnk\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306392 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57847e36-4024-4fcd-a141-ac9bac71a969-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306425 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306435 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.306445 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57847e36-4024-4fcd-a141-ac9bac71a969-httpd-run\") on node \"crc\" 
DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.309048 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" (UID: "ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.325724 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.408078 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:45 crc kubenswrapper[4979]: I0130 22:02:45.408120 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.119067 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.120218 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t86qb" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.170407 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.198745 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.211080 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t86qb"] Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.231081 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t86qb"] Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247163 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:46 crc kubenswrapper[4979]: E0130 22:02:46.247654 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-log" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247675 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-log" Jan 30 22:02:46 crc kubenswrapper[4979]: E0130 22:02:46.247703 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-httpd" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247712 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-httpd" Jan 30 22:02:46 crc kubenswrapper[4979]: E0130 22:02:46.247740 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="init" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247748 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="init" Jan 30 22:02:46 crc 
kubenswrapper[4979]: E0130 22:02:46.247760 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247766 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247948 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247964 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-log" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.247976 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" containerName="glance-httpd" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.249375 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.254277 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.258609 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.259429 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.324826 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.324900 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-logs\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.325118 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdgd\" (UniqueName: \"kubernetes.io/projected/6e002e48-1108-41f0-a1de-5a6b89d9e534-kube-api-access-htdgd\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.325205 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.325244 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.325268 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.325334 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.325360 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427246 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-logs\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427356 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htdgd\" (UniqueName: \"kubernetes.io/projected/6e002e48-1108-41f0-a1de-5a6b89d9e534-kube-api-access-htdgd\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427429 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427466 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427493 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427569 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" 
(UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427599 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427806 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-logs\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427837 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.427889 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.428023 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.436582 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.436906 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.437468 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.440428 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc 
kubenswrapper[4979]: I0130 22:02:46.449731 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htdgd\" (UniqueName: \"kubernetes.io/projected/6e002e48-1108-41f0-a1de-5a6b89d9e534-kube-api-access-htdgd\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.469304 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: E0130 22:02:46.528088 4979 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 22:02:46 crc kubenswrapper[4979]: E0130 22:02:46.528267 4979 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njts7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cinder-db-sync-cf4cw_openstack(80aa258c-fc1b-4379-8b50-ac89cb9b4568): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 22:02:46 crc kubenswrapper[4979]: E0130 22:02:46.530001 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-cf4cw" podUID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.571883 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.704090 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.840841 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-scripts\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.841440 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-combined-ca-bundle\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.841524 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.841553 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-config-data\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.841652 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w54d\" (UniqueName: \"kubernetes.io/projected/3d906f2e-2930-4b79-adf3-1367943b9a75-kube-api-access-2w54d\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.841726 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-logs\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.841812 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-httpd-run\") pod \"3d906f2e-2930-4b79-adf3-1367943b9a75\" (UID: \"3d906f2e-2930-4b79-adf3-1367943b9a75\") " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.842951 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-httpd-run" (OuterVolumeSpecName: "httpd-run") 
pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.844005 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-logs" (OuterVolumeSpecName: "logs") pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.849729 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-scripts" (OuterVolumeSpecName: "scripts") pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.850358 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.853698 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d906f2e-2930-4b79-adf3-1367943b9a75-kube-api-access-2w54d" (OuterVolumeSpecName: "kube-api-access-2w54d") pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "kube-api-access-2w54d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.881362 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.934429 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-config-data" (OuterVolumeSpecName: "config-data") pod "3d906f2e-2930-4b79-adf3-1367943b9a75" (UID: "3d906f2e-2930-4b79-adf3-1367943b9a75"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.949927 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.949967 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.949983 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.950106 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.950130 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d906f2e-2930-4b79-adf3-1367943b9a75-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.950145 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w54d\" (UniqueName: \"kubernetes.io/projected/3d906f2e-2930-4b79-adf3-1367943b9a75-kube-api-access-2w54d\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.950157 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d906f2e-2930-4b79-adf3-1367943b9a75-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.982299 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 22:02:46 crc kubenswrapper[4979]: I0130 22:02:46.986064 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dmn2z"] Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.052321 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.100621 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57847e36-4024-4fcd-a141-ac9bac71a969" path="/var/lib/kubelet/pods/57847e36-4024-4fcd-a141-ac9bac71a969/volumes" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.102091 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" path="/var/lib/kubelet/pods/ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4/volumes" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.133530 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d906f2e-2930-4b79-adf3-1367943b9a75","Type":"ContainerDied","Data":"0235a45d441f3c8cb176ee3ba5d7b3a592e383ad0a2a9bc2db0b906132b62220"} Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.134012 4979 scope.go:117] "RemoveContainer" containerID="d520027c614971a5476bc85d82647b2a7ab50c259d5e0da522c82c672fbac675" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 
22:02:47.133611 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.135904 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerStarted","Data":"f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122"} Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.141298 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c"} Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.146134 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dmn2z" event={"ID":"9686aad4-f2a7-4878-ae8b-f6142e93703a","Type":"ContainerStarted","Data":"95d8aa47cbce3a638a3e8c22804badd17f638cf4879d004b591bbbd61ab25324"} Jan 30 22:02:47 crc kubenswrapper[4979]: E0130 22:02:47.148301 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-cf4cw" podUID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.173324 4979 scope.go:117] "RemoveContainer" containerID="5b9ff4962b9217f70b58001e9bfc2d7fc1de3d24c309f129fa4adf95454b0c31" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.199910 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.216404 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.228500 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:47 crc kubenswrapper[4979]: E0130 22:02:47.229222 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-httpd" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.229250 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-httpd" Jan 30 22:02:47 crc kubenswrapper[4979]: E0130 22:02:47.229289 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-log" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.229298 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-log" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.229535 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-log" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.229565 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" containerName="glance-httpd" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.231841 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.236104 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.236478 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.255506 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.360754 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj67s\" (UniqueName: \"kubernetes.io/projected/c3b83faf-96cc-4787-814f-774416ea9811-kube-api-access-hj67s\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.360833 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.360882 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.360923 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.360954 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.360998 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-logs\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.361060 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.361148 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: W0130 22:02:47.367830 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e002e48_1108_41f0_a1de_5a6b89d9e534.slice/crio-deeb60df8742bac120d13441d56bc1b6e0ead1fd468b98aebf5923cd40c71e08 WatchSource:0}: Error finding container deeb60df8742bac120d13441d56bc1b6e0ead1fd468b98aebf5923cd40c71e08: Status 404 returned error can't find the container with id deeb60df8742bac120d13441d56bc1b6e0ead1fd468b98aebf5923cd40c71e08 Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.368268 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.462922 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463072 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj67s\" (UniqueName: \"kubernetes.io/projected/c3b83faf-96cc-4787-814f-774416ea9811-kube-api-access-hj67s\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463109 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463146 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463189 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463216 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463258 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-logs\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.463302 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.464501 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.464727 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-logs\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.465069 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.471167 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.471744 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.477792 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.479717 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.503618 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj67s\" (UniqueName: \"kubernetes.io/projected/c3b83faf-96cc-4787-814f-774416ea9811-kube-api-access-hj67s\") pod \"glance-default-external-api-0\" (UID: 
\"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.511590 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " pod="openstack/glance-default-external-api-0" Jan 30 22:02:47 crc kubenswrapper[4979]: I0130 22:02:47.559636 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.167784 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dmn2z" event={"ID":"9686aad4-f2a7-4878-ae8b-f6142e93703a","Type":"ContainerStarted","Data":"7f05f0be617476aee0f02ee8e76e53920df42776411e8ddeff1d11ffb5f9be89"} Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.172892 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s58pz" event={"ID":"9c59f1f7-caf7-4ab4-b405-dbf27330ff37","Type":"ContainerStarted","Data":"240dc00562487f4f79338fb7476cc903b5a593732bc0312e48d962f852dc3eeb"} Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.175864 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6e002e48-1108-41f0-a1de-5a6b89d9e534","Type":"ContainerStarted","Data":"7f78fdfb980e393a32d3e4e14baa1b2c7a2c7e241035d08dc24473d3ebce5a53"} Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.175923 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6e002e48-1108-41f0-a1de-5a6b89d9e534","Type":"ContainerStarted","Data":"deeb60df8742bac120d13441d56bc1b6e0ead1fd468b98aebf5923cd40c71e08"} Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.207764 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.222706 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dmn2z" podStartSLOduration=20.222673912 podStartE2EDuration="20.222673912s" podCreationTimestamp="2026-01-30 22:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:48.186836819 +0000 UTC m=+1364.148083852" watchObservedRunningTime="2026-01-30 22:02:48.222673912 +0000 UTC m=+1364.183920945" Jan 30 22:02:48 crc kubenswrapper[4979]: I0130 22:02:48.224022 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-s58pz" podStartSLOduration=2.017001483 podStartE2EDuration="32.224012938s" podCreationTimestamp="2026-01-30 22:02:16 +0000 UTC" firstStartedPulling="2026-01-30 22:02:17.634882834 +0000 UTC m=+1333.596129867" lastFinishedPulling="2026-01-30 22:02:47.841894279 +0000 UTC m=+1363.803141322" observedRunningTime="2026-01-30 22:02:48.207659042 +0000 UTC m=+1364.168906075" watchObservedRunningTime="2026-01-30 22:02:48.224012938 +0000 UTC m=+1364.185259971" Jan 30 22:02:49 crc kubenswrapper[4979]: I0130 22:02:49.089810 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d906f2e-2930-4b79-adf3-1367943b9a75" path="/var/lib/kubelet/pods/3d906f2e-2930-4b79-adf3-1367943b9a75/volumes" Jan 30 22:02:49 crc kubenswrapper[4979]: I0130 22:02:49.211003 
4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerStarted","Data":"a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d"} Jan 30 22:02:49 crc kubenswrapper[4979]: I0130 22:02:49.215375 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-t86qb" podUID="ff9a0ca8-b02c-4842-b2d3-d0fddf4bd8f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Jan 30 22:02:49 crc kubenswrapper[4979]: I0130 22:02:49.232804 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b83faf-96cc-4787-814f-774416ea9811","Type":"ContainerStarted","Data":"3e810c936e02f2844b80a87456dccb9adbb5f44faaa30ddef373326002018cd3"} Jan 30 22:02:50 crc kubenswrapper[4979]: I0130 22:02:50.290860 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6e002e48-1108-41f0-a1de-5a6b89d9e534","Type":"ContainerStarted","Data":"24204e17d4c44358eb3ce3054f01712860fc845201cf5a59bbd0c9532f6409e6"} Jan 30 22:02:50 crc kubenswrapper[4979]: I0130 22:02:50.306231 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b83faf-96cc-4787-814f-774416ea9811","Type":"ContainerStarted","Data":"14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd"} Jan 30 22:02:50 crc kubenswrapper[4979]: I0130 22:02:50.306595 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b83faf-96cc-4787-814f-774416ea9811","Type":"ContainerStarted","Data":"87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d"} Jan 30 22:02:50 crc kubenswrapper[4979]: I0130 22:02:50.337519 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.3374959220000004 podStartE2EDuration="4.337495922s" podCreationTimestamp="2026-01-30 22:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:50.324308712 +0000 UTC m=+1366.285555775" watchObservedRunningTime="2026-01-30 22:02:50.337495922 +0000 UTC m=+1366.298742955" Jan 30 22:02:52 crc kubenswrapper[4979]: I0130 22:02:52.328967 4979 generic.go:334] "Generic (PLEG): container finished" podID="9686aad4-f2a7-4878-ae8b-f6142e93703a" containerID="7f05f0be617476aee0f02ee8e76e53920df42776411e8ddeff1d11ffb5f9be89" exitCode=0 Jan 30 22:02:52 crc kubenswrapper[4979]: I0130 22:02:52.329333 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dmn2z" event={"ID":"9686aad4-f2a7-4878-ae8b-f6142e93703a","Type":"ContainerDied","Data":"7f05f0be617476aee0f02ee8e76e53920df42776411e8ddeff1d11ffb5f9be89"} Jan 30 22:02:52 crc kubenswrapper[4979]: I0130 22:02:52.330574 4979 generic.go:334] "Generic (PLEG): container finished" podID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37" containerID="240dc00562487f4f79338fb7476cc903b5a593732bc0312e48d962f852dc3eeb" exitCode=0 Jan 30 22:02:52 crc kubenswrapper[4979]: I0130 22:02:52.330604 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s58pz" event={"ID":"9c59f1f7-caf7-4ab4-b405-dbf27330ff37","Type":"ContainerDied","Data":"240dc00562487f4f79338fb7476cc903b5a593732bc0312e48d962f852dc3eeb"} Jan 30 22:02:52 crc 
kubenswrapper[4979]: I0130 22:02:52.360213 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.360191514 podStartE2EDuration="5.360191514s" podCreationTimestamp="2026-01-30 22:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:02:50.356119947 +0000 UTC m=+1366.317367010" watchObservedRunningTime="2026-01-30 22:02:52.360191514 +0000 UTC m=+1368.321438537" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.376955 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s58pz" event={"ID":"9c59f1f7-caf7-4ab4-b405-dbf27330ff37","Type":"ContainerDied","Data":"386d53c83a51fa8ebf1662105890a6cd9dd37690f36cb6bac7142c9df6dc4505"} Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.377755 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="386d53c83a51fa8ebf1662105890a6cd9dd37690f36cb6bac7142c9df6dc4505" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.380257 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dmn2z" event={"ID":"9686aad4-f2a7-4878-ae8b-f6142e93703a","Type":"ContainerDied","Data":"95d8aa47cbce3a638a3e8c22804badd17f638cf4879d004b591bbbd61ab25324"} Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.380304 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95d8aa47cbce3a638a3e8c22804badd17f638cf4879d004b591bbbd61ab25324" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.407557 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.456581 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.572267 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.572349 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.573288 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-scripts\") pod \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.573464 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-combined-ca-bundle\") pod \"9686aad4-f2a7-4878-ae8b-f6142e93703a\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.573527 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xcf8\" (UniqueName: \"kubernetes.io/projected/9686aad4-f2a7-4878-ae8b-f6142e93703a-kube-api-access-6xcf8\") pod \"9686aad4-f2a7-4878-ae8b-f6142e93703a\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.573562 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-scripts\") pod \"9686aad4-f2a7-4878-ae8b-f6142e93703a\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574528 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-combined-ca-bundle\") pod \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574626 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-credential-keys\") pod \"9686aad4-f2a7-4878-ae8b-f6142e93703a\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574655 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-config-data\") pod \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574790 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qncf2\" (UniqueName: \"kubernetes.io/projected/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-kube-api-access-qncf2\") pod \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574845 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-config-data\") pod 
\"9686aad4-f2a7-4878-ae8b-f6142e93703a\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574905 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-fernet-keys\") pod \"9686aad4-f2a7-4878-ae8b-f6142e93703a\" (UID: \"9686aad4-f2a7-4878-ae8b-f6142e93703a\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.574984 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-logs\") pod \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\" (UID: \"9c59f1f7-caf7-4ab4-b405-dbf27330ff37\") " Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.576172 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-logs" (OuterVolumeSpecName: "logs") pod "9c59f1f7-caf7-4ab4-b405-dbf27330ff37" (UID: "9c59f1f7-caf7-4ab4-b405-dbf27330ff37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.579359 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-scripts" (OuterVolumeSpecName: "scripts") pod "9686aad4-f2a7-4878-ae8b-f6142e93703a" (UID: "9686aad4-f2a7-4878-ae8b-f6142e93703a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.582381 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-kube-api-access-qncf2" (OuterVolumeSpecName: "kube-api-access-qncf2") pod "9c59f1f7-caf7-4ab4-b405-dbf27330ff37" (UID: "9c59f1f7-caf7-4ab4-b405-dbf27330ff37"). InnerVolumeSpecName "kube-api-access-qncf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.582687 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9686aad4-f2a7-4878-ae8b-f6142e93703a" (UID: "9686aad4-f2a7-4878-ae8b-f6142e93703a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.582901 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9686aad4-f2a7-4878-ae8b-f6142e93703a" (UID: "9686aad4-f2a7-4878-ae8b-f6142e93703a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.583013 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9686aad4-f2a7-4878-ae8b-f6142e93703a-kube-api-access-6xcf8" (OuterVolumeSpecName: "kube-api-access-6xcf8") pod "9686aad4-f2a7-4878-ae8b-f6142e93703a" (UID: "9686aad4-f2a7-4878-ae8b-f6142e93703a"). InnerVolumeSpecName "kube-api-access-6xcf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.585322 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-scripts" (OuterVolumeSpecName: "scripts") pod "9c59f1f7-caf7-4ab4-b405-dbf27330ff37" (UID: "9c59f1f7-caf7-4ab4-b405-dbf27330ff37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.614005 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c59f1f7-caf7-4ab4-b405-dbf27330ff37" (UID: "9c59f1f7-caf7-4ab4-b405-dbf27330ff37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.615559 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.622256 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-config-data" (OuterVolumeSpecName: "config-data") pod "9686aad4-f2a7-4878-ae8b-f6142e93703a" (UID: "9686aad4-f2a7-4878-ae8b-f6142e93703a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.624614 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.637693 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-config-data" (OuterVolumeSpecName: "config-data") pod "9c59f1f7-caf7-4ab4-b405-dbf27330ff37" (UID: "9c59f1f7-caf7-4ab4-b405-dbf27330ff37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.652335 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9686aad4-f2a7-4878-ae8b-f6142e93703a" (UID: "9686aad4-f2a7-4878-ae8b-f6142e93703a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.689881 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691482 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xcf8\" (UniqueName: \"kubernetes.io/projected/9686aad4-f2a7-4878-ae8b-f6142e93703a-kube-api-access-6xcf8\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691504 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691527 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691537 4979 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691547 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691560 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qncf2\" (UniqueName: \"kubernetes.io/projected/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-kube-api-access-qncf2\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691568 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691579 4979 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9686aad4-f2a7-4878-ae8b-f6142e93703a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691589 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:56 crc kubenswrapper[4979]: I0130 22:02:56.691600 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c59f1f7-caf7-4ab4-b405-dbf27330ff37-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.391317 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cj64f" event={"ID":"79723cfd-4e3c-446c-bdf1-5c2c997950a8","Type":"ContainerStarted","Data":"87b17ed31e0a099bbbdad24d1f20213b81ce5f1d8bbc12cb5d970696a0596091"} Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.394879 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerStarted","Data":"5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5"} Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.394898 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dmn2z" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.394898 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s58pz" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.395731 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.395779 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.422066 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-cj64f" podStartSLOduration=3.174150592 podStartE2EDuration="41.422024068s" podCreationTimestamp="2026-01-30 22:02:16 +0000 UTC" firstStartedPulling="2026-01-30 22:02:17.896552449 +0000 UTC m=+1333.857799482" lastFinishedPulling="2026-01-30 22:02:56.144425925 +0000 UTC m=+1372.105672958" observedRunningTime="2026-01-30 22:02:57.421572896 +0000 UTC m=+1373.382819939" watchObservedRunningTime="2026-01-30 22:02:57.422024068 +0000 UTC m=+1373.383271101" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.544118 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8467c9fd48-4d9pm"] Jan 30 22:02:57 crc kubenswrapper[4979]: E0130 22:02:57.544591 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9686aad4-f2a7-4878-ae8b-f6142e93703a" containerName="keystone-bootstrap" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.544610 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9686aad4-f2a7-4878-ae8b-f6142e93703a" containerName="keystone-bootstrap" Jan 30 22:02:57 crc kubenswrapper[4979]: E0130 22:02:57.544624 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37" containerName="placement-db-sync" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.544632 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37" containerName="placement-db-sync" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.544800 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37" containerName="placement-db-sync" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.544815 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9686aad4-f2a7-4878-ae8b-f6142e93703a" containerName="keystone-bootstrap" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.545860 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.550337 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.550989 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.552070 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.552130 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.552996 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nknfn" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.560702 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.560767 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.569995 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8467c9fd48-4d9pm"] Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.636991 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f5778c484-5rg8p"] Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.638951 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.648441 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.648796 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.649136 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dx6hv" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.649820 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.649957 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.650056 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.659414 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.662656 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.665760 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f5778c484-5rg8p"] Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714376 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-public-tls-certs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714516 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-scripts\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714569 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-internal-tls-certs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714602 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-config-data\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714633 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde9bde2-8262-41c5-b037-d2d4a44575f7-logs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714662 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spxhx\" (UniqueName: \"kubernetes.io/projected/fde9bde2-8262-41c5-b037-d2d4a44575f7-kube-api-access-spxhx\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.714731 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-combined-ca-bundle\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.815965 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-combined-ca-bundle\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816050 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-combined-ca-bundle\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816077 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-public-tls-certs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816101 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plqql\" (UniqueName: \"kubernetes.io/projected/93c29874-a63d-4d35-a1a6-256d811ac6f8-kube-api-access-plqql\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816145 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-fernet-keys\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816168 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-config-data\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816220 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-internal-tls-certs\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816245 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-public-tls-certs\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816267 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-scripts\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816304 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-internal-tls-certs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816324 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-config-data\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816349 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-credential-keys\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816372 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde9bde2-8262-41c5-b037-d2d4a44575f7-logs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816399 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spxhx\" (UniqueName: \"kubernetes.io/projected/fde9bde2-8262-41c5-b037-d2d4a44575f7-kube-api-access-spxhx\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.816419 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-scripts\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.820135 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde9bde2-8262-41c5-b037-d2d4a44575f7-logs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.821718 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-combined-ca-bundle\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.824797 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-internal-tls-certs\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.830948 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-config-data\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.831449 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-scripts\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.833477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-public-tls-certs\") pod \"placement-8467c9fd48-4d9pm\" (UID: 
\"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.849760 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spxhx\" (UniqueName: \"kubernetes.io/projected/fde9bde2-8262-41c5-b037-d2d4a44575f7-kube-api-access-spxhx\") pod \"placement-8467c9fd48-4d9pm\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.868743 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918538 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-scripts\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918631 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-combined-ca-bundle\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918683 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqql\" (UniqueName: \"kubernetes.io/projected/93c29874-a63d-4d35-a1a6-256d811ac6f8-kube-api-access-plqql\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918743 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-fernet-keys\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918773 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-config-data\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918813 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-internal-tls-certs\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918843 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-public-tls-certs\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.918910 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-credential-keys\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.935609 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-credential-keys\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.935691 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-combined-ca-bundle\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.936158 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-public-tls-certs\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.937163 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-scripts\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.937348 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-fernet-keys\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.937600 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-internal-tls-certs\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.946778 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-config-data\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.969839 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5574d874bd-cg256"] Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.971683 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:57 crc kubenswrapper[4979]: I0130 22:02:57.977737 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plqql\" (UniqueName: \"kubernetes.io/projected/93c29874-a63d-4d35-a1a6-256d811ac6f8-kube-api-access-plqql\") pod \"keystone-f5778c484-5rg8p\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.001943 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5574d874bd-cg256"] Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.123498 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-scripts\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.123618 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zhrl\" (UniqueName: \"kubernetes.io/projected/c808d1a7-071b-4af7-b86d-adbc0e98803b-kube-api-access-4zhrl\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.123927 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-config-data\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.124014 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c808d1a7-071b-4af7-b86d-adbc0e98803b-logs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.124158 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-internal-tls-certs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.124251 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-public-tls-certs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.124379 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-combined-ca-bundle\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227077 4979 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-scripts\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227171 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zhrl\" (UniqueName: \"kubernetes.io/projected/c808d1a7-071b-4af7-b86d-adbc0e98803b-kube-api-access-4zhrl\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227232 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-config-data\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227266 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c808d1a7-071b-4af7-b86d-adbc0e98803b-logs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227308 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-internal-tls-certs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227346 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-public-tls-certs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.227411 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-combined-ca-bundle\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.228498 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c808d1a7-071b-4af7-b86d-adbc0e98803b-logs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.233633 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-config-data\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.234126 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-scripts\") pod \"placement-5574d874bd-cg256\" (UID: 
\"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.239410 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-combined-ca-bundle\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.258749 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zhrl\" (UniqueName: \"kubernetes.io/projected/c808d1a7-071b-4af7-b86d-adbc0e98803b-kube-api-access-4zhrl\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.258845 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-public-tls-certs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.259336 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-internal-tls-certs\") pod \"placement-5574d874bd-cg256\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.282630 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.317763 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8467c9fd48-4d9pm"] Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.352986 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.428887 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8467c9fd48-4d9pm" event={"ID":"fde9bde2-8262-41c5-b037-d2d4a44575f7","Type":"ContainerStarted","Data":"3f6e1720fbdfa450cb84e7986470398e83ff14833f01c282921516d94399a109"} Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.429544 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.429612 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.628819 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f5778c484-5rg8p"] Jan 30 22:02:58 crc kubenswrapper[4979]: W0130 22:02:58.668749 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c29874_a63d_4d35_a1a6_256d811ac6f8.slice/crio-3e9edd35208d792f51f192e25b79d4b0f4b1e176ef66384b0abd50fdfae09711 WatchSource:0}: Error finding container 3e9edd35208d792f51f192e25b79d4b0f4b1e176ef66384b0abd50fdfae09711: Status 404 returned error can't find the container with id 3e9edd35208d792f51f192e25b79d4b0f4b1e176ef66384b0abd50fdfae09711 Jan 30 22:02:58 crc kubenswrapper[4979]: I0130 22:02:58.954982 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5574d874bd-cg256"] Jan 30 22:02:58 crc kubenswrapper[4979]: W0130 22:02:58.982559 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc808d1a7_071b_4af7_b86d_adbc0e98803b.slice/crio-c71bfcc6c14d502ef3f1710a10249e134e050a56fd12f729024104e4faa161e9 WatchSource:0}: Error finding container c71bfcc6c14d502ef3f1710a10249e134e050a56fd12f729024104e4faa161e9: Status 404 returned error can't find the container with id c71bfcc6c14d502ef3f1710a10249e134e050a56fd12f729024104e4faa161e9 Jan 30 22:02:59 crc kubenswrapper[4979]: I0130 22:02:59.453995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8467c9fd48-4d9pm" event={"ID":"fde9bde2-8262-41c5-b037-d2d4a44575f7","Type":"ContainerStarted","Data":"87f8bcdd0e14129a26f5189ed15ff85e52384caaf6a89397573a159ccff40e22"} Jan 30 22:02:59 crc kubenswrapper[4979]: I0130 22:02:59.454470 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8467c9fd48-4d9pm" event={"ID":"fde9bde2-8262-41c5-b037-d2d4a44575f7","Type":"ContainerStarted","Data":"7ad5003e1477b67c4d2b787fced03c11f214a9bb0cc53bcbe57eceed0842467d"} Jan 30 22:02:59 crc kubenswrapper[4979]: I0130 22:02:59.456353 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5574d874bd-cg256" event={"ID":"c808d1a7-071b-4af7-b86d-adbc0e98803b","Type":"ContainerStarted","Data":"c71bfcc6c14d502ef3f1710a10249e134e050a56fd12f729024104e4faa161e9"} Jan 30 22:02:59 crc kubenswrapper[4979]: I0130 22:02:59.458196 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5778c484-5rg8p" event={"ID":"93c29874-a63d-4d35-a1a6-256d811ac6f8","Type":"ContainerStarted","Data":"3e9edd35208d792f51f192e25b79d4b0f4b1e176ef66384b0abd50fdfae09711"} Jan 30 22:02:59 crc kubenswrapper[4979]: I0130 22:02:59.956121 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Jan 30 22:02:59 crc kubenswrapper[4979]: I0130 22:02:59.956288 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 22:03:00 crc kubenswrapper[4979]: I0130 22:03:00.066044 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 22:03:01 crc kubenswrapper[4979]: I0130 22:03:01.139278 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 22:03:01 crc kubenswrapper[4979]: I0130 22:03:01.139811 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 22:03:01 crc kubenswrapper[4979]: I0130 22:03:01.299497 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.497690 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5778c484-5rg8p" event={"ID":"93c29874-a63d-4d35-a1a6-256d811ac6f8","Type":"ContainerStarted","Data":"dc00335b3349ed9094fcb23ca1c7d69e4482f30a798683dca97095cbf88e35db"} Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.498808 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.502922 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5574d874bd-cg256" event={"ID":"c808d1a7-071b-4af7-b86d-adbc0e98803b","Type":"ContainerStarted","Data":"db8279f109bd17f628e44659d3d7f1d466d6bb9b71489014bb4d28dd40cb2a62"} Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.503002 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5574d874bd-cg256" event={"ID":"c808d1a7-071b-4af7-b86d-adbc0e98803b","Type":"ContainerStarted","Data":"4bff6c93d10ae5d79c2f86866faa569249ca91ad63e93e5aed7ec9e5c7ae69e3"} Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.503069 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.503092 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.503460 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.503603 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.533417 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f5778c484-5rg8p" podStartSLOduration=5.533383499 podStartE2EDuration="5.533383499s" podCreationTimestamp="2026-01-30 22:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:02.528652433 +0000 UTC m=+1378.489899476" watchObservedRunningTime="2026-01-30 22:03:02.533383499 +0000 UTC m=+1378.494630542" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.555981 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5574d874bd-cg256" podStartSLOduration=5.555956689 podStartE2EDuration="5.555956689s" podCreationTimestamp="2026-01-30 22:02:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:02.552807265 +0000 UTC m=+1378.514054318" watchObservedRunningTime="2026-01-30 22:03:02.555956689 +0000 UTC m=+1378.517203722" Jan 30 22:03:02 crc kubenswrapper[4979]: I0130 22:03:02.595789 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8467c9fd48-4d9pm" podStartSLOduration=5.595763397 podStartE2EDuration="5.595763397s" podCreationTimestamp="2026-01-30 22:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:02.58044428 +0000 UTC m=+1378.541691313" watchObservedRunningTime="2026-01-30 22:03:02.595763397 +0000 UTC m=+1378.557010430" Jan 30 22:03:04 crc kubenswrapper[4979]: I0130 22:03:04.524322 4979 generic.go:334] "Generic (PLEG): container finished" podID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" containerID="87b17ed31e0a099bbbdad24d1f20213b81ce5f1d8bbc12cb5d970696a0596091" exitCode=0 Jan 30 22:03:04 crc kubenswrapper[4979]: I0130 22:03:04.526159 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cj64f" event={"ID":"79723cfd-4e3c-446c-bdf1-5c2c997950a8","Type":"ContainerDied","Data":"87b17ed31e0a099bbbdad24d1f20213b81ce5f1d8bbc12cb5d970696a0596091"} Jan 30 22:03:04 crc kubenswrapper[4979]: I0130 22:03:04.649287 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:03:05 crc kubenswrapper[4979]: I0130 22:03:05.615946 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.421627 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cj64f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.457128 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-combined-ca-bundle\") pod \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.457349 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrndr\" (UniqueName: \"kubernetes.io/projected/79723cfd-4e3c-446c-bdf1-5c2c997950a8-kube-api-access-zrndr\") pod \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.457419 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-db-sync-config-data\") pod \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\" (UID: \"79723cfd-4e3c-446c-bdf1-5c2c997950a8\") " Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.468629 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79723cfd-4e3c-446c-bdf1-5c2c997950a8-kube-api-access-zrndr" (OuterVolumeSpecName: "kube-api-access-zrndr") pod "79723cfd-4e3c-446c-bdf1-5c2c997950a8" (UID: "79723cfd-4e3c-446c-bdf1-5c2c997950a8"). InnerVolumeSpecName "kube-api-access-zrndr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.476192 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "79723cfd-4e3c-446c-bdf1-5c2c997950a8" (UID: "79723cfd-4e3c-446c-bdf1-5c2c997950a8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.493126 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79723cfd-4e3c-446c-bdf1-5c2c997950a8" (UID: "79723cfd-4e3c-446c-bdf1-5c2c997950a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.549138 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cj64f" event={"ID":"79723cfd-4e3c-446c-bdf1-5c2c997950a8","Type":"ContainerDied","Data":"ace17961276b1e777acc172fefbadc89d1c575349207d8532faf89afa712f43e"} Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.549196 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ace17961276b1e777acc172fefbadc89d1c575349207d8532faf89afa712f43e" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.549208 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cj64f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.561004 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrndr\" (UniqueName: \"kubernetes.io/projected/79723cfd-4e3c-446c-bdf1-5c2c997950a8-kube-api-access-zrndr\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.561070 4979 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.561080 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79723cfd-4e3c-446c-bdf1-5c2c997950a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.829819 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-65c8fcd6dc-l7v2f"] Jan 30 22:03:06 crc kubenswrapper[4979]: E0130 22:03:06.830258 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" containerName="barbican-db-sync" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.830278 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" containerName="barbican-db-sync" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.830547 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" containerName="barbican-db-sync" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.831786 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.834245 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.834594 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cxc2m" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.837559 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.865372 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-combined-ca-bundle\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.865424 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data-custom\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.865472 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.865524 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxxns\" (UniqueName: \"kubernetes.io/projected/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-kube-api-access-qxxns\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.865563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-logs\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.887322 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7fddd57b54-bjm4k"] Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.889610 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.892947 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968186 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-combined-ca-bundle\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968707 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968751 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94177def-b41a-4af1-bcce-a0673da9f81c-logs\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968809 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxxns\" (UniqueName: \"kubernetes.io/projected/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-kube-api-access-qxxns\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968849 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data-custom\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968891 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-logs\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968933 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shqww\" (UniqueName: \"kubernetes.io/projected/94177def-b41a-4af1-bcce-a0673da9f81c-kube-api-access-shqww\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.968978 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-combined-ca-bundle\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " 
pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.969006 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data-custom\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.969057 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.970851 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-logs\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.982335 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-combined-ca-bundle\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:06 crc kubenswrapper[4979]: I0130 22:03:06.983583 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.002159 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-65c8fcd6dc-l7v2f"] Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.002931 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data-custom\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.004944 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxxns\" (UniqueName: \"kubernetes.io/projected/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-kube-api-access-qxxns\") pod \"barbican-worker-65c8fcd6dc-l7v2f\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.040412 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-9zshd"] Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.042543 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.057227 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7fddd57b54-bjm4k"] Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.070894 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shqww\" (UniqueName: \"kubernetes.io/projected/94177def-b41a-4af1-bcce-a0673da9f81c-kube-api-access-shqww\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.071008 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.071106 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-combined-ca-bundle\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.071165 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94177def-b41a-4af1-bcce-a0673da9f81c-logs\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.071290 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data-custom\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.076291 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94177def-b41a-4af1-bcce-a0673da9f81c-logs\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.078896 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data-custom\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.093663 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shqww\" (UniqueName: \"kubernetes.io/projected/94177def-b41a-4af1-bcce-a0673da9f81c-kube-api-access-shqww\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " 
pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.097322 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-9zshd"] Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.114556 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.114986 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-combined-ca-bundle\") pod \"barbican-keystone-listener-7fddd57b54-bjm4k\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.149899 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5455fcc558-tkb7p"] Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.151810 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.156833 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.160494 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5455fcc558-tkb7p"] Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.166181 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.176929 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.177479 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.177669 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-config\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.177783 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.177851 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.178120 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5n8d\" (UniqueName: \"kubernetes.io/projected/a48297f7-feed-4cde-9fb5-bb823c838752-kube-api-access-w5n8d\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.242090 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280318 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280394 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data-custom\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280465 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-config\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280493 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-combined-ca-bundle\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280540 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280567 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280640 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtvw2\" (UniqueName: \"kubernetes.io/projected/0aa8f9d6-442a-4070-b11f-13564f4c2c43-kube-api-access-xtvw2\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280708 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5n8d\" (UniqueName: \"kubernetes.io/projected/a48297f7-feed-4cde-9fb5-bb823c838752-kube-api-access-w5n8d\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280749 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data\") pod 
\"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280784 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.280839 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa8f9d6-442a-4070-b11f-13564f4c2c43-logs\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.282126 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.286522 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-config\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.286583 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.286756 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.289488 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.308118 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5n8d\" (UniqueName: \"kubernetes.io/projected/a48297f7-feed-4cde-9fb5-bb823c838752-kube-api-access-w5n8d\") pod \"dnsmasq-dns-586bdc5f9-9zshd\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.380285 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.382362 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.382433 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa8f9d6-442a-4070-b11f-13564f4c2c43-logs\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.382467 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data-custom\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.382510 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-combined-ca-bundle\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.382575 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtvw2\" (UniqueName: \"kubernetes.io/projected/0aa8f9d6-442a-4070-b11f-13564f4c2c43-kube-api-access-xtvw2\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.383266 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa8f9d6-442a-4070-b11f-13564f4c2c43-logs\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.387114 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data-custom\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.387534 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-combined-ca-bundle\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.387897 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc 
kubenswrapper[4979]: I0130 22:03:07.403904 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtvw2\" (UniqueName: \"kubernetes.io/projected/0aa8f9d6-442a-4070-b11f-13564f4c2c43-kube-api-access-xtvw2\") pod \"barbican-api-5455fcc558-tkb7p\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:07 crc kubenswrapper[4979]: I0130 22:03:07.469963 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.053498 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7fddd57b54-bjm4k"] Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.069071 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-65c8fcd6dc-l7v2f"] Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.183178 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-9zshd"] Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.192299 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5455fcc558-tkb7p"] Jan 30 22:03:08 crc kubenswrapper[4979]: W0130 22:03:08.192562 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda48297f7_feed_4cde_9fb5_bb823c838752.slice/crio-c8ca843d70d052f671f8017744034e4dc0dfd5c98d5ad2cc2ba15fb3dd212df5 WatchSource:0}: Error finding container c8ca843d70d052f671f8017744034e4dc0dfd5c98d5ad2cc2ba15fb3dd212df5: Status 404 returned error can't find the container with id c8ca843d70d052f671f8017744034e4dc0dfd5c98d5ad2cc2ba15fb3dd212df5 Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.573362 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerStarted","Data":"1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9"} Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.574371 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.573572 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="proxy-httpd" containerID="cri-o://1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9" gracePeriod=30 Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.573491 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-central-agent" containerID="cri-o://f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122" gracePeriod=30 Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.573683 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-notification-agent" containerID="cri-o://a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d" gracePeriod=30 Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.573713 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="sg-core" 
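The four "Killing container with a grace period" entries tear down ceilometer-0's containers with gracePeriod=30: the runtime delivers the stop signal, and the kubelet waits up to the grace period for the container to exit before escalating to a forced kill. A small sketch of that wait-then-escalate pattern (illustrative names, not the real CRI calls):

```go
package main

import (
	"fmt"
	"time"
)

// Wait-then-escalate termination, the behavior behind "Killing container
// with a grace period ... gracePeriod=30". Illustrative only; the real
// kubelet drives this through the container runtime's stop call.
func killContainer(id string, grace time.Duration, exited <-chan int) {
	fmt.Printf("Killing container %s with grace period %s\n", id, grace)
	// The runtime first delivers the container's stop signal (SIGTERM by
	// default); we then wait for the container to exit on its own.
	select {
	case code := <-exited:
		fmt.Printf("container finished, exitCode=%d\n", code)
	case <-time.After(grace):
		fmt.Printf("grace period expired, force killing %s\n", id)
	}
}

func main() {
	exited := make(chan int, 1)
	go func() {
		// Simulate a container that handles SIGTERM promptly.
		time.Sleep(100 * time.Millisecond)
		exited <- 0
	}()
	killContainer("cri-o://1c7366d9", 30*time.Second, exited)
}
```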
containerID="cri-o://5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5" gracePeriod=30 Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.576578 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" event={"ID":"a48297f7-feed-4cde-9fb5-bb823c838752","Type":"ContainerStarted","Data":"c8ca843d70d052f671f8017744034e4dc0dfd5c98d5ad2cc2ba15fb3dd212df5"} Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.580235 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" event={"ID":"cdfe8d13-8537-4477-ae9e-5c9aa6e104de","Type":"ContainerStarted","Data":"bb24789e94c037f8d2c30cb247391e1793581183cde1ad3d02b4c483f6507c5b"} Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.586316 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5455fcc558-tkb7p" event={"ID":"0aa8f9d6-442a-4070-b11f-13564f4c2c43","Type":"ContainerStarted","Data":"73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8"} Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.586384 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5455fcc558-tkb7p" event={"ID":"0aa8f9d6-442a-4070-b11f-13564f4c2c43","Type":"ContainerStarted","Data":"03e1b95e9a7f4f77b0e701bca53f07e0dfe1f445b0928c440b8370f19dcd14de"} Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.588669 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" event={"ID":"94177def-b41a-4af1-bcce-a0673da9f81c","Type":"ContainerStarted","Data":"3dde96c5169697a3e0c9d8b160bc83a4fafb1d44e05b294c10a09b1f06d958c9"} Jan 30 22:03:08 crc kubenswrapper[4979]: I0130 22:03:08.609772 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.830771924 podStartE2EDuration="52.609752773s" podCreationTimestamp="2026-01-30 22:02:16 +0000 UTC" firstStartedPulling="2026-01-30 22:02:17.784255144 +0000 UTC m=+1333.745502177" lastFinishedPulling="2026-01-30 22:03:07.563235993 +0000 UTC m=+1383.524483026" observedRunningTime="2026-01-30 22:03:08.603824296 +0000 UTC m=+1384.565071329" watchObservedRunningTime="2026-01-30 22:03:08.609752773 +0000 UTC m=+1384.570999806" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.613534 4979 generic.go:334] "Generic (PLEG): container finished" podID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerID="1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9" exitCode=0 Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.613890 4979 generic.go:334] "Generic (PLEG): container finished" podID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerID="5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5" exitCode=2 Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.613898 4979 generic.go:334] "Generic (PLEG): container finished" podID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerID="f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122" exitCode=0 Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.613941 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerDied","Data":"1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.613970 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerDied","Data":"5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.613982 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerDied","Data":"f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.616054 4979 generic.go:334] "Generic (PLEG): container finished" podID="a48297f7-feed-4cde-9fb5-bb823c838752" containerID="adbbc2a81ab034dd96c63d4ba709ca63691a9f7f475eee828c2446c45a19e39c" exitCode=0 Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.616116 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" event={"ID":"a48297f7-feed-4cde-9fb5-bb823c838752","Type":"ContainerDied","Data":"adbbc2a81ab034dd96c63d4ba709ca63691a9f7f475eee828c2446c45a19e39c"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.619978 4979 generic.go:334] "Generic (PLEG): container finished" podID="8481722d-b63c-4f8e-82e2-0960d719b46b" containerID="d89396dba43eda148feb03a8bfaa17357461f4fc9b9261374a3239bcbd38441a" exitCode=0 Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.620094 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qjfmb" event={"ID":"8481722d-b63c-4f8e-82e2-0960d719b46b","Type":"ContainerDied","Data":"d89396dba43eda148feb03a8bfaa17357461f4fc9b9261374a3239bcbd38441a"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.623223 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5455fcc558-tkb7p" event={"ID":"0aa8f9d6-442a-4070-b11f-13564f4c2c43","Type":"ContainerStarted","Data":"3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.623451 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.633697 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cf4cw" event={"ID":"80aa258c-fc1b-4379-8b50-ac89cb9b4568","Type":"ContainerStarted","Data":"009e01f0d8f5d7eb63f0cb71f39fe5ecce8c1604f3d9fcde721ca558795f16e3"} Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.655501 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6cd6984846-6pk8x"] Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.658070 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.661359 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.662580 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.684688 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cd6984846-6pk8x"] Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.684695 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-cf4cw" podStartSLOduration=4.804469838 podStartE2EDuration="54.684674919s" podCreationTimestamp="2026-01-30 22:02:15 +0000 UTC" firstStartedPulling="2026-01-30 22:02:17.631998676 +0000 UTC m=+1333.593245719" lastFinishedPulling="2026-01-30 22:03:07.512203767 +0000 UTC m=+1383.473450800" observedRunningTime="2026-01-30 22:03:09.671335185 +0000 UTC m=+1385.632582218" watchObservedRunningTime="2026-01-30 22:03:09.684674919 +0000 UTC m=+1385.645921952" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.731362 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5455fcc558-tkb7p" podStartSLOduration=2.731339159 podStartE2EDuration="2.731339159s" podCreationTimestamp="2026-01-30 22:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:09.716741982 +0000 UTC m=+1385.677989005" watchObservedRunningTime="2026-01-30 22:03:09.731339159 +0000 UTC m=+1385.692586192" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753541 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-public-tls-certs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753662 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753741 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-combined-ca-bundle\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753801 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data-custom\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753840 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c466a98-f01c-49ab-841a-8f35c54e71f3-logs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753946 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-internal-tls-certs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.753969 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw6m2\" (UniqueName: \"kubernetes.io/projected/5c466a98-f01c-49ab-841a-8f35c54e71f3-kube-api-access-fw6m2\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.855739 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-public-tls-certs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.855834 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.855899 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-combined-ca-bundle\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.855953 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data-custom\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.855973 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c466a98-f01c-49ab-841a-8f35c54e71f3-logs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.856052 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-internal-tls-certs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.856070 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fw6m2\" (UniqueName: \"kubernetes.io/projected/5c466a98-f01c-49ab-841a-8f35c54e71f3-kube-api-access-fw6m2\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.857649 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c466a98-f01c-49ab-841a-8f35c54e71f3-logs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.864586 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-combined-ca-bundle\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.865245 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-internal-tls-certs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.865937 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-public-tls-certs\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.867002 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.870138 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data-custom\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.874930 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw6m2\" (UniqueName: \"kubernetes.io/projected/5c466a98-f01c-49ab-841a-8f35c54e71f3-kube-api-access-fw6m2\") pod \"barbican-api-6cd6984846-6pk8x\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:09 crc kubenswrapper[4979]: I0130 22:03:09.984932 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.644295 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" event={"ID":"94177def-b41a-4af1-bcce-a0673da9f81c","Type":"ContainerStarted","Data":"0a36922f832fee9028934a3bf94046644f1757e67d16e088681eff93cf07c0b1"} Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.650386 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" event={"ID":"a48297f7-feed-4cde-9fb5-bb823c838752","Type":"ContainerStarted","Data":"2893c29abd93b15fdfa58149607534288c61efb11cd99143e85b9748d89719b5"} Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.650671 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.662192 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" event={"ID":"cdfe8d13-8537-4477-ae9e-5c9aa6e104de","Type":"ContainerStarted","Data":"d775e4bedb5dba7162d0b89985eadfea2585c2425816a98d45bf2a5aee52a9dc"} Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.670364 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.692255 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" podStartSLOduration=4.692234205 podStartE2EDuration="4.692234205s" podCreationTimestamp="2026-01-30 22:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:10.690835497 +0000 UTC m=+1386.652082530" watchObservedRunningTime="2026-01-30 22:03:10.692234205 +0000 UTC m=+1386.653481238" Jan 30 22:03:10 crc kubenswrapper[4979]: I0130 22:03:10.726114 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cd6984846-6pk8x"] Jan 30 22:03:10 crc kubenswrapper[4979]: W0130 22:03:10.739586 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c466a98_f01c_49ab_841a_8f35c54e71f3.slice/crio-ba4330dae356e6288f48e7433253a51e62211bb964fb07d760695db2d247a961 WatchSource:0}: Error finding container ba4330dae356e6288f48e7433253a51e62211bb964fb07d760695db2d247a961: Status 404 returned error can't find the container with id ba4330dae356e6288f48e7433253a51e62211bb964fb07d760695db2d247a961 Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.082691 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.212290 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-config\") pod \"8481722d-b63c-4f8e-82e2-0960d719b46b\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.212825 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/8481722d-b63c-4f8e-82e2-0960d719b46b-kube-api-access-vpvb2\") pod \"8481722d-b63c-4f8e-82e2-0960d719b46b\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.212852 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-combined-ca-bundle\") pod \"8481722d-b63c-4f8e-82e2-0960d719b46b\" (UID: \"8481722d-b63c-4f8e-82e2-0960d719b46b\") " Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.223463 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8481722d-b63c-4f8e-82e2-0960d719b46b-kube-api-access-vpvb2" (OuterVolumeSpecName: "kube-api-access-vpvb2") pod "8481722d-b63c-4f8e-82e2-0960d719b46b" (UID: "8481722d-b63c-4f8e-82e2-0960d719b46b"). InnerVolumeSpecName "kube-api-access-vpvb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.268834 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8481722d-b63c-4f8e-82e2-0960d719b46b" (UID: "8481722d-b63c-4f8e-82e2-0960d719b46b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.301176 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-config" (OuterVolumeSpecName: "config") pod "8481722d-b63c-4f8e-82e2-0960d719b46b" (UID: "8481722d-b63c-4f8e-82e2-0960d719b46b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.315708 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.315997 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpvb2\" (UniqueName: \"kubernetes.io/projected/8481722d-b63c-4f8e-82e2-0960d719b46b-kube-api-access-vpvb2\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.316132 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8481722d-b63c-4f8e-82e2-0960d719b46b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.707638 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" event={"ID":"94177def-b41a-4af1-bcce-a0673da9f81c","Type":"ContainerStarted","Data":"1e3a41213e0b64183674077174838e4b857951ec8d86a2d97f557ed86825981e"} Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.716135 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" event={"ID":"cdfe8d13-8537-4477-ae9e-5c9aa6e104de","Type":"ContainerStarted","Data":"9d8dfa3f28e549253bc3c74adc2593d512df4a8ba19da4e9daca2c7d742b4a42"} Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.753662 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" podStartSLOduration=3.64259039 podStartE2EDuration="5.753636301s" podCreationTimestamp="2026-01-30 22:03:06 +0000 UTC" firstStartedPulling="2026-01-30 22:03:08.08188198 +0000 UTC m=+1384.043129013" lastFinishedPulling="2026-01-30 22:03:10.192927891 +0000 UTC m=+1386.154174924" observedRunningTime="2026-01-30 22:03:11.732095208 +0000 UTC m=+1387.693342241" watchObservedRunningTime="2026-01-30 22:03:11.753636301 +0000 UTC m=+1387.714883334" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.759746 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cd6984846-6pk8x" event={"ID":"5c466a98-f01c-49ab-841a-8f35c54e71f3","Type":"ContainerStarted","Data":"b87dfaf39281615f48403ce307bb51ad9f7df21ce90a59879ea17a4270453139"} Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.759806 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cd6984846-6pk8x" event={"ID":"5c466a98-f01c-49ab-841a-8f35c54e71f3","Type":"ContainerStarted","Data":"edcc79875734fdba9dd8e28171366d93b289c592ed8ec92b3fba51d021505e99"} Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.759820 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cd6984846-6pk8x" event={"ID":"5c466a98-f01c-49ab-841a-8f35c54e71f3","Type":"ContainerStarted","Data":"ba4330dae356e6288f48e7433253a51e62211bb964fb07d760695db2d247a961"} Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.760258 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.760336 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.764182 4979 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-db-sync-qjfmb" event={"ID":"8481722d-b63c-4f8e-82e2-0960d719b46b","Type":"ContainerDied","Data":"a6dfcf2666450f941993bd82183ea573b68e922f74cae89ccb55c9417b058213"} Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.764236 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6dfcf2666450f941993bd82183ea573b68e922f74cae89ccb55c9417b058213" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.764397 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qjfmb" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.771211 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" podStartSLOduration=3.6373820820000002 podStartE2EDuration="5.771184557s" podCreationTimestamp="2026-01-30 22:03:06 +0000 UTC" firstStartedPulling="2026-01-30 22:03:08.081534101 +0000 UTC m=+1384.042781134" lastFinishedPulling="2026-01-30 22:03:10.215336576 +0000 UTC m=+1386.176583609" observedRunningTime="2026-01-30 22:03:11.75700611 +0000 UTC m=+1387.718253143" watchObservedRunningTime="2026-01-30 22:03:11.771184557 +0000 UTC m=+1387.732431590" Jan 30 22:03:11 crc kubenswrapper[4979]: I0130 22:03:11.808706 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6cd6984846-6pk8x" podStartSLOduration=2.8086799940000002 podStartE2EDuration="2.808679994s" podCreationTimestamp="2026-01-30 22:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:11.798433432 +0000 UTC m=+1387.759680465" watchObservedRunningTime="2026-01-30 22:03:11.808679994 +0000 UTC m=+1387.769927027" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.022655 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-9zshd"] Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.073793 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s2cv2"] Jan 30 22:03:12 crc kubenswrapper[4979]: E0130 22:03:12.074577 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8481722d-b63c-4f8e-82e2-0960d719b46b" containerName="neutron-db-sync" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.074605 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8481722d-b63c-4f8e-82e2-0960d719b46b" containerName="neutron-db-sync" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.074994 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8481722d-b63c-4f8e-82e2-0960d719b46b" containerName="neutron-db-sync" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.079612 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.139625 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s2cv2"] Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.209728 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-575496bbc6-tpmv9"] Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.211825 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.219772 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.220157 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.220386 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.220886 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cgj89" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.221749 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-575496bbc6-tpmv9"] Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.250052 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.250112 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.250215 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dsxp\" (UniqueName: \"kubernetes.io/projected/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-kube-api-access-2dsxp\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.250260 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-config\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.250279 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.250347 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.352730 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.352841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-combined-ca-bundle\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.352921 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.352967 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353043 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-httpd-config\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353085 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-ovndb-tls-certs\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353200 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-config\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353226 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dsxp\" (UniqueName: \"kubernetes.io/projected/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-kube-api-access-2dsxp\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353289 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qjbm\" (UniqueName: \"kubernetes.io/projected/ba4b7345-9c9c-46e9-ac9a-d84093867012-kube-api-access-4qjbm\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353376 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-config\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.353405 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.355490 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.355963 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.356502 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.357095 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-config\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.357389 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.390973 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dsxp\" (UniqueName: \"kubernetes.io/projected/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-kube-api-access-2dsxp\") pod \"dnsmasq-dns-85ff748b95-s2cv2\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.440188 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.454977 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-combined-ca-bundle\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.455481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-httpd-config\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.455608 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-ovndb-tls-certs\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.455682 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-config\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.455723 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qjbm\" (UniqueName: \"kubernetes.io/projected/ba4b7345-9c9c-46e9-ac9a-d84093867012-kube-api-access-4qjbm\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.462368 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-combined-ca-bundle\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.463185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-httpd-config\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.464046 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-config\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.472676 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-ovndb-tls-certs\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.493517 4979 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4qjbm\" (UniqueName: \"kubernetes.io/projected/ba4b7345-9c9c-46e9-ac9a-d84093867012-kube-api-access-4qjbm\") pod \"neutron-575496bbc6-tpmv9\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.542999 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.592069 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.762455 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-sg-core-conf-yaml\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.763052 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-run-httpd\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.763115 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-scripts\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.763158 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-combined-ca-bundle\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.763240 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-log-httpd\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.763299 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwcxd\" (UniqueName: \"kubernetes.io/projected/6043875b-c6a4-4cbd-919e-79a61239eaa6-kube-api-access-xwcxd\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.763368 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-config-data\") pod \"6043875b-c6a4-4cbd-919e-79a61239eaa6\" (UID: \"6043875b-c6a4-4cbd-919e-79a61239eaa6\") " Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.764104 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.764751 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.768658 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6043875b-c6a4-4cbd-919e-79a61239eaa6-kube-api-access-xwcxd" (OuterVolumeSpecName: "kube-api-access-xwcxd") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "kube-api-access-xwcxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.787436 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-scripts" (OuterVolumeSpecName: "scripts") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.810050 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.840933 4979 generic.go:334] "Generic (PLEG): container finished" podID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerID="a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d" exitCode=0 Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.841490 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.841564 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerDied","Data":"a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d"} Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.841616 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6043875b-c6a4-4cbd-919e-79a61239eaa6","Type":"ContainerDied","Data":"6a2d854ec1dbd82bcfa5a4f7a9ec2e600f535da300f6990faf526c3822b41bfd"} Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.841641 4979 scope.go:117] "RemoveContainer" containerID="1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.845227 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" containerName="dnsmasq-dns" containerID="cri-o://2893c29abd93b15fdfa58149607534288c61efb11cd99143e85b9748d89719b5" gracePeriod=10 Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.866123 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwcxd\" (UniqueName: \"kubernetes.io/projected/6043875b-c6a4-4cbd-919e-79a61239eaa6-kube-api-access-xwcxd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.866157 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.866166 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.866175 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.866184 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6043875b-c6a4-4cbd-919e-79a61239eaa6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.902269 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.917811 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-config-data" (OuterVolumeSpecName: "config-data") pod "6043875b-c6a4-4cbd-919e-79a61239eaa6" (UID: "6043875b-c6a4-4cbd-919e-79a61239eaa6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.969287 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:12 crc kubenswrapper[4979]: I0130 22:03:12.969324 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6043875b-c6a4-4cbd-919e-79a61239eaa6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.032641 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s2cv2"] Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.170895 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.188590 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.221075 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:13 crc kubenswrapper[4979]: E0130 22:03:13.221995 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-central-agent" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222110 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-central-agent" Jan 30 22:03:13 crc kubenswrapper[4979]: E0130 22:03:13.222144 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="sg-core" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222152 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="sg-core" Jan 30 22:03:13 crc kubenswrapper[4979]: E0130 22:03:13.222174 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="proxy-httpd" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222180 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="proxy-httpd" Jan 30 22:03:13 crc kubenswrapper[4979]: E0130 22:03:13.222195 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-notification-agent" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222204 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-notification-agent" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222473 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="proxy-httpd" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222515 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-notification-agent" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222532 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="ceilometer-central-agent" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.222552 4979 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" containerName="sg-core" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.225547 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.229565 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.229844 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.230190 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.346167 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-575496bbc6-tpmv9"] Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.381169 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.381516 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-scripts\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.381833 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-config-data\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.381947 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fnwm\" (UniqueName: \"kubernetes.io/projected/ed53d4b7-eca6-4720-95ca-82db55e50fe7-kube-api-access-9fnwm\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.382019 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.382180 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.382359 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484584 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484706 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-scripts\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484788 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-config-data\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484831 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fnwm\" (UniqueName: \"kubernetes.io/projected/ed53d4b7-eca6-4720-95ca-82db55e50fe7-kube-api-access-9fnwm\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484870 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484916 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.484989 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.485593 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.485966 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.494477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.495304 4979 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-config-data\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.503014 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-scripts\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.503965 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.506405 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fnwm\" (UniqueName: \"kubernetes.io/projected/ed53d4b7-eca6-4720-95ca-82db55e50fe7-kube-api-access-9fnwm\") pod \"ceilometer-0\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.548997 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.853192 4979 generic.go:334] "Generic (PLEG): container finished" podID="a48297f7-feed-4cde-9fb5-bb823c838752" containerID="2893c29abd93b15fdfa58149607534288c61efb11cd99143e85b9748d89719b5" exitCode=0 Jan 30 22:03:13 crc kubenswrapper[4979]: I0130 22:03:13.853256 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" event={"ID":"a48297f7-feed-4cde-9fb5-bb823c838752","Type":"ContainerDied","Data":"2893c29abd93b15fdfa58149607534288c61efb11cd99143e85b9748d89719b5"} Jan 30 22:03:14 crc kubenswrapper[4979]: I0130 22:03:14.984269 4979 scope.go:117] "RemoveContainer" containerID="5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5" Jan 30 22:03:14 crc kubenswrapper[4979]: W0130 22:03:14.995489 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc2890a_dab6_4a6e_a7fd_a26feb5b2bb8.slice/crio-0212d06a744f8cd1b66d318c030ed6a7f7216496fa3e7ef430e0ba4efdf447a9 WatchSource:0}: Error finding container 0212d06a744f8cd1b66d318c030ed6a7f7216496fa3e7ef430e0ba4efdf447a9: Status 404 returned error can't find the container with id 0212d06a744f8cd1b66d318c030ed6a7f7216496fa3e7ef430e0ba4efdf447a9 Jan 30 22:03:14 crc kubenswrapper[4979]: W0130 22:03:14.996564 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba4b7345_9c9c_46e9_ac9a_d84093867012.slice/crio-d5b0558da39d39eea1a978ceb8d04e793a4cb1b04e75dc57e8d0bbef896534cc WatchSource:0}: Error finding container d5b0558da39d39eea1a978ceb8d04e793a4cb1b04e75dc57e8d0bbef896534cc: Status 404 returned error can't find the container with id d5b0558da39d39eea1a978ceb8d04e793a4cb1b04e75dc57e8d0bbef896534cc Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.088905 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6043875b-c6a4-4cbd-919e-79a61239eaa6" path="/var/lib/kubelet/pods/6043875b-c6a4-4cbd-919e-79a61239eaa6/volumes" Jan 30 22:03:15 crc 
kubenswrapper[4979]: I0130 22:03:15.213053 4979 scope.go:117] "RemoveContainer" containerID="a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.316831 4979 scope.go:117] "RemoveContainer" containerID="f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.356450 4979 scope.go:117] "RemoveContainer" containerID="1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9" Jan 30 22:03:15 crc kubenswrapper[4979]: E0130 22:03:15.358965 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9\": container with ID starting with 1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9 not found: ID does not exist" containerID="1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.359075 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9"} err="failed to get container status \"1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9\": rpc error: code = NotFound desc = could not find container \"1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9\": container with ID starting with 1c7366d931d887f0f9145c96389a0ade64d9dc657014823d01ebb27af3f386a9 not found: ID does not exist" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.359140 4979 scope.go:117] "RemoveContainer" containerID="5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5" Jan 30 22:03:15 crc kubenswrapper[4979]: E0130 22:03:15.360714 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5\": container with ID starting with 5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5 not found: ID does not exist" containerID="5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.360765 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5"} err="failed to get container status \"5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5\": rpc error: code = NotFound desc = could not find container \"5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5\": container with ID starting with 5daf75d19e84e8e80795ad31a77283ea7aafdfebe75e74647c88aa62d95fa7c5 not found: ID does not exist" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.360800 4979 scope.go:117] "RemoveContainer" containerID="a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d" Jan 30 22:03:15 crc kubenswrapper[4979]: E0130 22:03:15.362015 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d\": container with ID starting with a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d not found: ID does not exist" containerID="a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.362122 4979 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d"} err="failed to get container status \"a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d\": rpc error: code = NotFound desc = could not find container \"a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d\": container with ID starting with a6a9c594a52dcdf39f7a148f67bd983973fe8b830ac29d255fe82d0b9d85526d not found: ID does not exist" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.362191 4979 scope.go:117] "RemoveContainer" containerID="f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122" Jan 30 22:03:15 crc kubenswrapper[4979]: E0130 22:03:15.362991 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122\": container with ID starting with f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122 not found: ID does not exist" containerID="f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.363057 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122"} err="failed to get container status \"f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122\": rpc error: code = NotFound desc = could not find container \"f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122\": container with ID starting with f1522f1ddf26c6ea4b595cc1ae43ab4b76959eb106e3eb53113087fa07856122 not found: ID does not exist" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.397211 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.664589 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ccc5789d5-9fbcz"] Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.667449 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.671276 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.671784 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.716374 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ccc5789d5-9fbcz"] Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746094 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746236 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-combined-ca-bundle\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746283 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7gb\" (UniqueName: \"kubernetes.io/projected/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-kube-api-access-sb7gb\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746330 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-ovndb-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746440 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746689 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-public-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.746914 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-internal-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.833967 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.849481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-internal-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.849665 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.849752 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-combined-ca-bundle\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.849816 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb7gb\" (UniqueName: \"kubernetes.io/projected/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-kube-api-access-sb7gb\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.850024 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-ovndb-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.850157 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.850398 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-public-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.860237 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-ovndb-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.873121 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-combined-ca-bundle\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.877297 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.880468 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-public-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.886577 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb7gb\" (UniqueName: \"kubernetes.io/projected/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-kube-api-access-sb7gb\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.888686 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.888916 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-internal-tls-certs\") pod \"neutron-ccc5789d5-9fbcz\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.898020 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerStarted","Data":"6dbed89dcb99abab4522a3860a00ee5c7bea5cb37a875572e8e74067b72a1d9c"} Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.902585 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575496bbc6-tpmv9" event={"ID":"ba4b7345-9c9c-46e9-ac9a-d84093867012","Type":"ContainerStarted","Data":"a9f9f27cea01a15c9754036e794a52a02aaf9c4cde1417cb268dd678a86d49a7"} Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.902650 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575496bbc6-tpmv9" event={"ID":"ba4b7345-9c9c-46e9-ac9a-d84093867012","Type":"ContainerStarted","Data":"31b519ed42ee2d318c5e8593b192627b5f74f877124ccf9521649301b379434d"} Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.902668 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575496bbc6-tpmv9" event={"ID":"ba4b7345-9c9c-46e9-ac9a-d84093867012","Type":"ContainerStarted","Data":"d5b0558da39d39eea1a978ceb8d04e793a4cb1b04e75dc57e8d0bbef896534cc"} Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.902934 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.909298 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" event={"ID":"a48297f7-feed-4cde-9fb5-bb823c838752","Type":"ContainerDied","Data":"c8ca843d70d052f671f8017744034e4dc0dfd5c98d5ad2cc2ba15fb3dd212df5"} Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 
22:03:15.909366 4979 scope.go:117] "RemoveContainer" containerID="2893c29abd93b15fdfa58149607534288c61efb11cd99143e85b9748d89719b5" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.909430 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-9zshd" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.911056 4979 generic.go:334] "Generic (PLEG): container finished" podID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerID="28fa5fdce3759a70252b84e9d2a3128dd1ea647aeca78f30af0e925e772a5b64" exitCode=0 Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.911109 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" event={"ID":"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8","Type":"ContainerDied","Data":"28fa5fdce3759a70252b84e9d2a3128dd1ea647aeca78f30af0e925e772a5b64"} Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.911133 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" event={"ID":"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8","Type":"ContainerStarted","Data":"0212d06a744f8cd1b66d318c030ed6a7f7216496fa3e7ef430e0ba4efdf447a9"} Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.938170 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-575496bbc6-tpmv9" podStartSLOduration=3.938143052 podStartE2EDuration="3.938143052s" podCreationTimestamp="2026-01-30 22:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:15.92189522 +0000 UTC m=+1391.883142273" watchObservedRunningTime="2026-01-30 22:03:15.938143052 +0000 UTC m=+1391.899390085" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.953266 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-config\") pod \"a48297f7-feed-4cde-9fb5-bb823c838752\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.953343 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5n8d\" (UniqueName: \"kubernetes.io/projected/a48297f7-feed-4cde-9fb5-bb823c838752-kube-api-access-w5n8d\") pod \"a48297f7-feed-4cde-9fb5-bb823c838752\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.953427 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-svc\") pod \"a48297f7-feed-4cde-9fb5-bb823c838752\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.953519 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-swift-storage-0\") pod \"a48297f7-feed-4cde-9fb5-bb823c838752\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.953669 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-nb\") pod \"a48297f7-feed-4cde-9fb5-bb823c838752\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " Jan 30 22:03:15 crc 
kubenswrapper[4979]: I0130 22:03:15.953721 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-sb\") pod \"a48297f7-feed-4cde-9fb5-bb823c838752\" (UID: \"a48297f7-feed-4cde-9fb5-bb823c838752\") " Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.965262 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a48297f7-feed-4cde-9fb5-bb823c838752-kube-api-access-w5n8d" (OuterVolumeSpecName: "kube-api-access-w5n8d") pod "a48297f7-feed-4cde-9fb5-bb823c838752" (UID: "a48297f7-feed-4cde-9fb5-bb823c838752"). InnerVolumeSpecName "kube-api-access-w5n8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.993912 4979 scope.go:117] "RemoveContainer" containerID="adbbc2a81ab034dd96c63d4ba709ca63691a9f7f475eee828c2446c45a19e39c" Jan 30 22:03:15 crc kubenswrapper[4979]: I0130 22:03:15.997678 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.048095 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a48297f7-feed-4cde-9fb5-bb823c838752" (UID: "a48297f7-feed-4cde-9fb5-bb823c838752"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.048092 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a48297f7-feed-4cde-9fb5-bb823c838752" (UID: "a48297f7-feed-4cde-9fb5-bb823c838752"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.056284 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a48297f7-feed-4cde-9fb5-bb823c838752" (UID: "a48297f7-feed-4cde-9fb5-bb823c838752"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.056300 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-config" (OuterVolumeSpecName: "config") pod "a48297f7-feed-4cde-9fb5-bb823c838752" (UID: "a48297f7-feed-4cde-9fb5-bb823c838752"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.060062 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.060102 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.060115 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.060127 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.060139 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5n8d\" (UniqueName: \"kubernetes.io/projected/a48297f7-feed-4cde-9fb5-bb823c838752-kube-api-access-w5n8d\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.064773 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a48297f7-feed-4cde-9fb5-bb823c838752" (UID: "a48297f7-feed-4cde-9fb5-bb823c838752"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.165696 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a48297f7-feed-4cde-9fb5-bb823c838752-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.313194 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-9zshd"] Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.332798 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-9zshd"] Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.710918 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ccc5789d5-9fbcz"] Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.947343 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" event={"ID":"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8","Type":"ContainerStarted","Data":"079a608dc24b31a4f88315dc45c4eac9e51e3ae04392a654b2e5881b47f5deea"} Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.949591 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.951400 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ccc5789d5-9fbcz" event={"ID":"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd","Type":"ContainerStarted","Data":"ca8441f7e30661b52f9821e4f8bade797db77f1bc59f74f658c35d0b1cade61a"} Jan 30 22:03:16 crc kubenswrapper[4979]: I0130 22:03:16.958144 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerStarted","Data":"d353219cc9b3f8542020689ad8fe1dc4cafe48d65da929904d82b00146b5cd56"} Jan 30 22:03:17 crc kubenswrapper[4979]: I0130 22:03:17.095759 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" path="/var/lib/kubelet/pods/a48297f7-feed-4cde-9fb5-bb823c838752/volumes" Jan 30 22:03:17 crc kubenswrapper[4979]: I0130 22:03:17.971705 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ccc5789d5-9fbcz" event={"ID":"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd","Type":"ContainerStarted","Data":"cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260"} Jan 30 22:03:17 crc kubenswrapper[4979]: I0130 22:03:17.972617 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ccc5789d5-9fbcz" event={"ID":"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd","Type":"ContainerStarted","Data":"94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a"} Jan 30 22:03:17 crc kubenswrapper[4979]: I0130 22:03:17.972640 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:03:17 crc kubenswrapper[4979]: I0130 22:03:17.975185 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerStarted","Data":"8198ed5db540cb004bee8f636d59637892dad01fde8c2addcb3d150233b81eb8"} Jan 30 22:03:18 crc kubenswrapper[4979]: I0130 22:03:18.000994 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-ccc5789d5-9fbcz" podStartSLOduration=3.000968471 podStartE2EDuration="3.000968471s" podCreationTimestamp="2026-01-30 22:03:15 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:17.996501822 +0000 UTC m=+1393.957748875" watchObservedRunningTime="2026-01-30 22:03:18.000968471 +0000 UTC m=+1393.962215504" Jan 30 22:03:18 crc kubenswrapper[4979]: I0130 22:03:18.003358 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" podStartSLOduration=6.003347293 podStartE2EDuration="6.003347293s" podCreationTimestamp="2026-01-30 22:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:16.997557416 +0000 UTC m=+1392.958804449" watchObservedRunningTime="2026-01-30 22:03:18.003347293 +0000 UTC m=+1393.964594326" Jan 30 22:03:19 crc kubenswrapper[4979]: I0130 22:03:19.010933 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerStarted","Data":"379541b071bcc3ff3b76c9a28614a8a7781d3946bd15e75deec7d7faf821f69f"} Jan 30 22:03:19 crc kubenswrapper[4979]: I0130 22:03:19.579351 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:19 crc kubenswrapper[4979]: I0130 22:03:19.641335 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:20 crc kubenswrapper[4979]: I0130 22:03:20.022864 4979 generic.go:334] "Generic (PLEG): container finished" podID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" containerID="009e01f0d8f5d7eb63f0cb71f39fe5ecce8c1604f3d9fcde721ca558795f16e3" exitCode=0 Jan 30 22:03:20 crc kubenswrapper[4979]: I0130 22:03:20.024224 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cf4cw" event={"ID":"80aa258c-fc1b-4379-8b50-ac89cb9b4568","Type":"ContainerDied","Data":"009e01f0d8f5d7eb63f0cb71f39fe5ecce8c1604f3d9fcde721ca558795f16e3"} Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.497686 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.599552 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-combined-ca-bundle\") pod \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.599629 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-scripts\") pod \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.599770 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njts7\" (UniqueName: \"kubernetes.io/projected/80aa258c-fc1b-4379-8b50-ac89cb9b4568-kube-api-access-njts7\") pod \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.600045 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-config-data\") pod \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.600076 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80aa258c-fc1b-4379-8b50-ac89cb9b4568-etc-machine-id\") pod \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.600161 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-db-sync-config-data\") pod \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\" (UID: \"80aa258c-fc1b-4379-8b50-ac89cb9b4568\") " Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.600359 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80aa258c-fc1b-4379-8b50-ac89cb9b4568-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "80aa258c-fc1b-4379-8b50-ac89cb9b4568" (UID: "80aa258c-fc1b-4379-8b50-ac89cb9b4568"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.600667 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80aa258c-fc1b-4379-8b50-ac89cb9b4568-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.619447 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-scripts" (OuterVolumeSpecName: "scripts") pod "80aa258c-fc1b-4379-8b50-ac89cb9b4568" (UID: "80aa258c-fc1b-4379-8b50-ac89cb9b4568"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.622508 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80aa258c-fc1b-4379-8b50-ac89cb9b4568-kube-api-access-njts7" (OuterVolumeSpecName: "kube-api-access-njts7") pod "80aa258c-fc1b-4379-8b50-ac89cb9b4568" (UID: "80aa258c-fc1b-4379-8b50-ac89cb9b4568"). InnerVolumeSpecName "kube-api-access-njts7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.633204 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "80aa258c-fc1b-4379-8b50-ac89cb9b4568" (UID: "80aa258c-fc1b-4379-8b50-ac89cb9b4568"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.674202 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-config-data" (OuterVolumeSpecName: "config-data") pod "80aa258c-fc1b-4379-8b50-ac89cb9b4568" (UID: "80aa258c-fc1b-4379-8b50-ac89cb9b4568"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.681525 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80aa258c-fc1b-4379-8b50-ac89cb9b4568" (UID: "80aa258c-fc1b-4379-8b50-ac89cb9b4568"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.702518 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.702568 4979 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.702583 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.702595 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80aa258c-fc1b-4379-8b50-ac89cb9b4568-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.702605 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njts7\" (UniqueName: \"kubernetes.io/projected/80aa258c-fc1b-4379-8b50-ac89cb9b4568-kube-api-access-njts7\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:21 crc kubenswrapper[4979]: I0130 22:03:21.999866 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.048998 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerStarted","Data":"b4264ee1205b9f14594303d45e026381cac0a39c9757db5ba8d73f991ffb0e32"} Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.049193 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.051487 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cf4cw" event={"ID":"80aa258c-fc1b-4379-8b50-ac89cb9b4568","Type":"ContainerDied","Data":"aa21a6f3e8e7a60f26b7105869748a94bb2157b238e798b219f1aa067289e1a3"} Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.051519 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa21a6f3e8e7a60f26b7105869748a94bb2157b238e798b219f1aa067289e1a3" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.051569 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-cf4cw" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.075322 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.2735942 podStartE2EDuration="9.075289773s" podCreationTimestamp="2026-01-30 22:03:13 +0000 UTC" firstStartedPulling="2026-01-30 22:03:15.407136666 +0000 UTC m=+1391.368383699" lastFinishedPulling="2026-01-30 22:03:21.208832229 +0000 UTC m=+1397.170079272" observedRunningTime="2026-01-30 22:03:22.072876748 +0000 UTC m=+1398.034123781" watchObservedRunningTime="2026-01-30 22:03:22.075289773 +0000 UTC m=+1398.036536816" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.127116 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.214207 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5455fcc558-tkb7p"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.214461 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5455fcc558-tkb7p" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api-log" containerID="cri-o://73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8" gracePeriod=30 Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.214943 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5455fcc558-tkb7p" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api" containerID="cri-o://3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67" gracePeriod=30 Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.435128 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:22 crc kubenswrapper[4979]: E0130 22:03:22.445539 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" containerName="dnsmasq-dns" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.445600 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" containerName="dnsmasq-dns" Jan 30 22:03:22 crc kubenswrapper[4979]: E0130 22:03:22.445641 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" containerName="init" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.445648 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" containerName="init" Jan 30 22:03:22 crc kubenswrapper[4979]: E0130 22:03:22.445666 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" containerName="cinder-db-sync" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.445673 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" containerName="cinder-db-sync" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.445957 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" containerName="cinder-db-sync" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.445981 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a48297f7-feed-4cde-9fb5-bb823c838752" containerName="dnsmasq-dns" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.447050 4979 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.448105 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.450831 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5h7pb" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.451482 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.452718 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.457799 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.479740 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.532646 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7trcb\" (UniqueName: \"kubernetes.io/projected/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-kube-api-access-7trcb\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.532768 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.532827 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.532881 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.532912 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.532938 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.606321 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-785d8bcb8c-plpcc"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.606570 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerName="dnsmasq-dns" containerID="cri-o://1d1c26d6f08b899fc938d9e9e56bd49d29a4055ed2b289e8b5b646f2046dec68" gracePeriod=10 Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635475 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635545 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635586 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635611 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635671 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7trcb\" (UniqueName: \"kubernetes.io/projected/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-kube-api-access-7trcb\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635725 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.635822 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.644130 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.644527 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nph2b"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 
22:03:22.646588 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.646742 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.650647 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.670127 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nph2b"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.701800 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.701988 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7trcb\" (UniqueName: \"kubernetes.io/projected/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-kube-api-access-7trcb\") pod \"cinder-scheduler-0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") " pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.741771 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.741886 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.741916 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-config\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.741939 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqtbq\" (UniqueName: \"kubernetes.io/projected/058e90a8-7816-4982-96eb-0390f9f09ef5-kube-api-access-sqtbq\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.742004 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.742073 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.781826 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.812130 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.813982 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.819360 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.821622 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.845893 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0bc9229-6c16-4bd2-b677-f26acb49716e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846023 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846066 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846090 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846109 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-scripts\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846134 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tcv\" (UniqueName: 
\"kubernetes.io/projected/c0bc9229-6c16-4bd2-b677-f26acb49716e-kube-api-access-66tcv\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846157 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846190 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846217 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc9229-6c16-4bd2-b677-f26acb49716e-logs\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846280 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846312 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846342 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-config\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.846363 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqtbq\" (UniqueName: \"kubernetes.io/projected/058e90a8-7816-4982-96eb-0390f9f09ef5-kube-api-access-sqtbq\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.847670 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.847829 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: 
\"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.848232 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.849288 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-config\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.859196 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.867060 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqtbq\" (UniqueName: \"kubernetes.io/projected/058e90a8-7816-4982-96eb-0390f9f09ef5-kube-api-access-sqtbq\") pod \"dnsmasq-dns-5c9776ccc5-nph2b\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.918631 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951576 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951630 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951663 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-scripts\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951690 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tcv\" (UniqueName: \"kubernetes.io/projected/c0bc9229-6c16-4bd2-b677-f26acb49716e-kube-api-access-66tcv\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951764 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc9229-6c16-4bd2-b677-f26acb49716e-logs\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " 
pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951816 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951897 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0bc9229-6c16-4bd2-b677-f26acb49716e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.951992 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0bc9229-6c16-4bd2-b677-f26acb49716e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.952720 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc9229-6c16-4bd2-b677-f26acb49716e-logs\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.966206 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.972334 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.977643 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-scripts\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.994306 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tcv\" (UniqueName: \"kubernetes.io/projected/c0bc9229-6c16-4bd2-b677-f26acb49716e-kube-api-access-66tcv\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:22 crc kubenswrapper[4979]: I0130 22:03:22.994384 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data\") pod \"cinder-api-0\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " pod="openstack/cinder-api-0" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.092953 4979 generic.go:334] "Generic (PLEG): container finished" podID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerID="73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8" exitCode=143 Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.098108 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-5455fcc558-tkb7p" event={"ID":"0aa8f9d6-442a-4070-b11f-13564f4c2c43","Type":"ContainerDied","Data":"73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8"} Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.105487 4979 generic.go:334] "Generic (PLEG): container finished" podID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerID="1d1c26d6f08b899fc938d9e9e56bd49d29a4055ed2b289e8b5b646f2046dec68" exitCode=0 Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.105770 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" event={"ID":"734e25b4-90d2-466b-a71d-029b7fd4b491","Type":"ContainerDied","Data":"1d1c26d6f08b899fc938d9e9e56bd49d29a4055ed2b289e8b5b646f2046dec68"} Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.243751 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.470072 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.476614 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l7jp\" (UniqueName: \"kubernetes.io/projected/734e25b4-90d2-466b-a71d-029b7fd4b491-kube-api-access-4l7jp\") pod \"734e25b4-90d2-466b-a71d-029b7fd4b491\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.479615 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-swift-storage-0\") pod \"734e25b4-90d2-466b-a71d-029b7fd4b491\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.480258 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-sb\") pod \"734e25b4-90d2-466b-a71d-029b7fd4b491\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.525387 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/734e25b4-90d2-466b-a71d-029b7fd4b491-kube-api-access-4l7jp" (OuterVolumeSpecName: "kube-api-access-4l7jp") pod "734e25b4-90d2-466b-a71d-029b7fd4b491" (UID: "734e25b4-90d2-466b-a71d-029b7fd4b491"). InnerVolumeSpecName "kube-api-access-4l7jp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.584300 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-config\") pod \"734e25b4-90d2-466b-a71d-029b7fd4b491\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.584370 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-svc\") pod \"734e25b4-90d2-466b-a71d-029b7fd4b491\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.584421 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-nb\") pod \"734e25b4-90d2-466b-a71d-029b7fd4b491\" (UID: \"734e25b4-90d2-466b-a71d-029b7fd4b491\") " Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.585162 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l7jp\" (UniqueName: \"kubernetes.io/projected/734e25b4-90d2-466b-a71d-029b7fd4b491-kube-api-access-4l7jp\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.590207 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.633143 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "734e25b4-90d2-466b-a71d-029b7fd4b491" (UID: "734e25b4-90d2-466b-a71d-029b7fd4b491"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.656773 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "734e25b4-90d2-466b-a71d-029b7fd4b491" (UID: "734e25b4-90d2-466b-a71d-029b7fd4b491"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.677506 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nph2b"] Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.693068 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.693125 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.717272 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "734e25b4-90d2-466b-a71d-029b7fd4b491" (UID: "734e25b4-90d2-466b-a71d-029b7fd4b491"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.746594 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-config" (OuterVolumeSpecName: "config") pod "734e25b4-90d2-466b-a71d-029b7fd4b491" (UID: "734e25b4-90d2-466b-a71d-029b7fd4b491"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.752327 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "734e25b4-90d2-466b-a71d-029b7fd4b491" (UID: "734e25b4-90d2-466b-a71d-029b7fd4b491"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.796692 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.797165 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.797205 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/734e25b4-90d2-466b-a71d-029b7fd4b491-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:23 crc kubenswrapper[4979]: W0130 22:03:23.897637 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0bc9229_6c16_4bd2_b677_f26acb49716e.slice/crio-8e7c65fe5fd0a55ee90e356ead049d0c73bccc76a87378beab8412f7890e9da6 WatchSource:0}: Error finding container 8e7c65fe5fd0a55ee90e356ead049d0c73bccc76a87378beab8412f7890e9da6: Status 404 returned error can't find the container with id 8e7c65fe5fd0a55ee90e356ead049d0c73bccc76a87378beab8412f7890e9da6 Jan 30 22:03:23 crc kubenswrapper[4979]: I0130 22:03:23.907964 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.125609 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" event={"ID":"058e90a8-7816-4982-96eb-0390f9f09ef5","Type":"ContainerStarted","Data":"d466d90f2d37f6a5ffe695492f5a86148cdb526bdbc83ccf9934c5bdbb75a655"} Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.125683 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" event={"ID":"058e90a8-7816-4982-96eb-0390f9f09ef5","Type":"ContainerStarted","Data":"bd0c08ab5da0f9972ab0ecfaa7d4a96b3e692f626faf2e99b754b19a6fd17552"} Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.129314 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0","Type":"ContainerStarted","Data":"9e233c467b56b274cf91a0fd383468a12ee48c944ec900a8f2ba3fafe0a3e4a7"} Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.139324 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"c0bc9229-6c16-4bd2-b677-f26acb49716e","Type":"ContainerStarted","Data":"8e7c65fe5fd0a55ee90e356ead049d0c73bccc76a87378beab8412f7890e9da6"} Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.146189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" event={"ID":"734e25b4-90d2-466b-a71d-029b7fd4b491","Type":"ContainerDied","Data":"0bbffd435fbf3836f4de2a4551e90534d72d8f16d6de3150a0817077872230f4"} Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.146535 4979 scope.go:117] "RemoveContainer" containerID="1d1c26d6f08b899fc938d9e9e56bd49d29a4055ed2b289e8b5b646f2046dec68" Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.146841 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-plpcc" Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.188361 4979 scope.go:117] "RemoveContainer" containerID="a84e16cda693df587eff75844a45206ef87069920f6876c4a2c9eb4f7fae9fbe" Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.197352 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-plpcc"] Jan 30 22:03:24 crc kubenswrapper[4979]: I0130 22:03:24.205339 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-plpcc"] Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.090000 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" path="/var/lib/kubelet/pods/734e25b4-90d2-466b-a71d-029b7fd4b491/volumes" Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.162711 4979 generic.go:334] "Generic (PLEG): container finished" podID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerID="d466d90f2d37f6a5ffe695492f5a86148cdb526bdbc83ccf9934c5bdbb75a655" exitCode=0 Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.163176 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" event={"ID":"058e90a8-7816-4982-96eb-0390f9f09ef5","Type":"ContainerDied","Data":"d466d90f2d37f6a5ffe695492f5a86148cdb526bdbc83ccf9934c5bdbb75a655"} Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.163475 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" event={"ID":"058e90a8-7816-4982-96eb-0390f9f09ef5","Type":"ContainerStarted","Data":"cde1d8ef9853814ac0538e668f22acd209e1123ba255255d91b5dde006032de3"} Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.163605 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.185561 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0bc9229-6c16-4bd2-b677-f26acb49716e","Type":"ContainerStarted","Data":"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419"} Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.218890 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" podStartSLOduration=3.218861002 podStartE2EDuration="3.218861002s" podCreationTimestamp="2026-01-30 22:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:25.203969995 +0000 UTC m=+1401.165217018" watchObservedRunningTime="2026-01-30 22:03:25.218861002 +0000 UTC m=+1401.180108025" Jan 30 22:03:25 crc 
kubenswrapper[4979]: I0130 22:03:25.438402 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.463677 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5455fcc558-tkb7p" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:37790->10.217.0.159:9311: read: connection reset by peer" Jan 30 22:03:25 crc kubenswrapper[4979]: I0130 22:03:25.463676 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5455fcc558-tkb7p" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:37786->10.217.0.159:9311: read: connection reset by peer" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.039965 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.048932 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data-custom\") pod \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.048987 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data\") pod \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.049847 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtvw2\" (UniqueName: \"kubernetes.io/projected/0aa8f9d6-442a-4070-b11f-13564f4c2c43-kube-api-access-xtvw2\") pod \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.050009 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa8f9d6-442a-4070-b11f-13564f4c2c43-logs\") pod \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.050131 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-combined-ca-bundle\") pod \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\" (UID: \"0aa8f9d6-442a-4070-b11f-13564f4c2c43\") " Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.050650 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aa8f9d6-442a-4070-b11f-13564f4c2c43-logs" (OuterVolumeSpecName: "logs") pod "0aa8f9d6-442a-4070-b11f-13564f4c2c43" (UID: "0aa8f9d6-442a-4070-b11f-13564f4c2c43"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.051266 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa8f9d6-442a-4070-b11f-13564f4c2c43-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.057760 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0aa8f9d6-442a-4070-b11f-13564f4c2c43" (UID: "0aa8f9d6-442a-4070-b11f-13564f4c2c43"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.060184 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa8f9d6-442a-4070-b11f-13564f4c2c43-kube-api-access-xtvw2" (OuterVolumeSpecName: "kube-api-access-xtvw2") pod "0aa8f9d6-442a-4070-b11f-13564f4c2c43" (UID: "0aa8f9d6-442a-4070-b11f-13564f4c2c43"). InnerVolumeSpecName "kube-api-access-xtvw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.089245 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0aa8f9d6-442a-4070-b11f-13564f4c2c43" (UID: "0aa8f9d6-442a-4070-b11f-13564f4c2c43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.116876 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data" (OuterVolumeSpecName: "config-data") pod "0aa8f9d6-442a-4070-b11f-13564f4c2c43" (UID: "0aa8f9d6-442a-4070-b11f-13564f4c2c43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.153157 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.153192 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.153202 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtvw2\" (UniqueName: \"kubernetes.io/projected/0aa8f9d6-442a-4070-b11f-13564f4c2c43-kube-api-access-xtvw2\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.153212 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa8f9d6-442a-4070-b11f-13564f4c2c43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.226928 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0bc9229-6c16-4bd2-b677-f26acb49716e","Type":"ContainerStarted","Data":"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91"} Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.227125 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api-log" containerID="cri-o://b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419" gracePeriod=30 Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.227386 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.227677 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api" containerID="cri-o://ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91" gracePeriod=30 Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.241244 4979 generic.go:334] "Generic (PLEG): container finished" podID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerID="3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67" exitCode=0 Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.241328 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5455fcc558-tkb7p" event={"ID":"0aa8f9d6-442a-4070-b11f-13564f4c2c43","Type":"ContainerDied","Data":"3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67"} Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.241370 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5455fcc558-tkb7p" event={"ID":"0aa8f9d6-442a-4070-b11f-13564f4c2c43","Type":"ContainerDied","Data":"03e1b95e9a7f4f77b0e701bca53f07e0dfe1f445b0928c440b8370f19dcd14de"} Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.241395 4979 scope.go:117] "RemoveContainer" containerID="3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.241527 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5455fcc558-tkb7p" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.248091 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0","Type":"ContainerStarted","Data":"dd60a59ae6cdfbc405f90d689ab84d25f406577d4b685fef4f1f04460e816ffb"} Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.265927 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.265893786 podStartE2EDuration="4.265893786s" podCreationTimestamp="2026-01-30 22:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:26.251055071 +0000 UTC m=+1402.212302124" watchObservedRunningTime="2026-01-30 22:03:26.265893786 +0000 UTC m=+1402.227140829" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.329799 4979 scope.go:117] "RemoveContainer" containerID="73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.383778 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5455fcc558-tkb7p"] Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.406093 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5455fcc558-tkb7p"] Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.429539 4979 scope.go:117] "RemoveContainer" containerID="3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67" Jan 30 22:03:26 crc kubenswrapper[4979]: E0130 22:03:26.430310 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67\": container with ID starting with 3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67 not found: ID does not exist" containerID="3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.430385 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67"} err="failed to get container status \"3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67\": rpc error: code = NotFound desc = could not find container \"3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67\": container with ID starting with 3a467649c13eb55f15b6f3b87b9ad51a348d45f403094acb5c8a9db23fd65f67 not found: ID does not exist" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.430433 4979 scope.go:117] "RemoveContainer" containerID="73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8" Jan 30 22:03:26 crc kubenswrapper[4979]: E0130 22:03:26.432765 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8\": container with ID starting with 73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8 not found: ID does not exist" containerID="73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8" Jan 30 22:03:26 crc kubenswrapper[4979]: I0130 22:03:26.432837 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8"} 
err="failed to get container status \"73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8\": rpc error: code = NotFound desc = could not find container \"73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8\": container with ID starting with 73bcdb645d2048b7a24acf1421249f992c15d0ba9eaf001201ebd92a3cf203a8 not found: ID does not exist" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.086264 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" path="/var/lib/kubelet/pods/0aa8f9d6-442a-4070-b11f-13564f4c2c43/volumes" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.228862 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.264782 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0","Type":"ContainerStarted","Data":"af1e56adf69dc8dcae71e643ccc863182f7586ad5f57a96be638e265eb505d2d"} Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267121 4979 generic.go:334] "Generic (PLEG): container finished" podID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerID="ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91" exitCode=0 Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267156 4979 generic.go:334] "Generic (PLEG): container finished" podID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerID="b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419" exitCode=143 Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267258 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267453 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0bc9229-6c16-4bd2-b677-f26acb49716e","Type":"ContainerDied","Data":"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91"} Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267511 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0bc9229-6c16-4bd2-b677-f26acb49716e","Type":"ContainerDied","Data":"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419"} Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267535 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0bc9229-6c16-4bd2-b677-f26acb49716e","Type":"ContainerDied","Data":"8e7c65fe5fd0a55ee90e356ead049d0c73bccc76a87378beab8412f7890e9da6"} Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.267563 4979 scope.go:117] "RemoveContainer" containerID="ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299047 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-scripts\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299130 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data-custom\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 
22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299211 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299243 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66tcv\" (UniqueName: \"kubernetes.io/projected/c0bc9229-6c16-4bd2-b677-f26acb49716e-kube-api-access-66tcv\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299269 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0bc9229-6c16-4bd2-b677-f26acb49716e-etc-machine-id\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299365 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc9229-6c16-4bd2-b677-f26acb49716e-logs\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.299386 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-combined-ca-bundle\") pod \"c0bc9229-6c16-4bd2-b677-f26acb49716e\" (UID: \"c0bc9229-6c16-4bd2-b677-f26acb49716e\") " Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.300269 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0bc9229-6c16-4bd2-b677-f26acb49716e-logs" (OuterVolumeSpecName: "logs") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.304405 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0bc9229-6c16-4bd2-b677-f26acb49716e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.310287 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-scripts" (OuterVolumeSpecName: "scripts") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.310535 4979 scope.go:117] "RemoveContainer" containerID="b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.312673 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0bc9229-6c16-4bd2-b677-f26acb49716e-kube-api-access-66tcv" (OuterVolumeSpecName: "kube-api-access-66tcv") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "kube-api-access-66tcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.313346 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.323993 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.404963812 podStartE2EDuration="5.323965903s" podCreationTimestamp="2026-01-30 22:03:22 +0000 UTC" firstStartedPulling="2026-01-30 22:03:23.592053394 +0000 UTC m=+1399.553300417" lastFinishedPulling="2026-01-30 22:03:24.511055475 +0000 UTC m=+1400.472302508" observedRunningTime="2026-01-30 22:03:27.301589749 +0000 UTC m=+1403.262836792" watchObservedRunningTime="2026-01-30 22:03:27.323965903 +0000 UTC m=+1403.285212936" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.371449 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.386273 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data" (OuterVolumeSpecName: "config-data") pod "c0bc9229-6c16-4bd2-b677-f26acb49716e" (UID: "c0bc9229-6c16-4bd2-b677-f26acb49716e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402672 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402742 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402758 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402772 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66tcv\" (UniqueName: \"kubernetes.io/projected/c0bc9229-6c16-4bd2-b677-f26acb49716e-kube-api-access-66tcv\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402787 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0bc9229-6c16-4bd2-b677-f26acb49716e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402794 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc9229-6c16-4bd2-b677-f26acb49716e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.402801 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc9229-6c16-4bd2-b677-f26acb49716e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.481173 4979 scope.go:117] "RemoveContainer" containerID="ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.481735 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91\": container with ID starting with ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91 not found: ID does not exist" containerID="ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.481808 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91"} err="failed to get container status \"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91\": rpc error: code = NotFound desc = could not find container \"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91\": container with ID starting with ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91 not found: ID does not exist" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.481853 4979 scope.go:117] "RemoveContainer" containerID="b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.482291 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419\": container with ID starting with b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419 not found: ID does not exist" containerID="b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.482709 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419"} err="failed to get container status \"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419\": rpc error: code = NotFound desc = could not find container \"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419\": container with ID starting with b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419 not found: ID does not exist" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.482735 4979 scope.go:117] "RemoveContainer" containerID="ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.483851 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91"} err="failed to get container status \"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91\": rpc error: code = NotFound desc = could not find container \"ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91\": container with ID starting with ab623eec00c15a616c091a10a2c981d899f35b26c497a731b51f318173b73e91 not found: ID does not exist" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.483894 4979 scope.go:117] "RemoveContainer" containerID="b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.484313 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419"} err="failed to get container status \"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419\": rpc error: code = NotFound desc = could not find container \"b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419\": container with ID starting with b3abeb4cd8cb2ae3b604378983e8098e9945aa382ea8e88b92aec1e127a37419 not found: ID does not exist" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.617555 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.641224 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.661786 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.662677 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerName="init" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.662774 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerName="init" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.662887 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerName="dnsmasq-dns" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.662946 4979 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerName="dnsmasq-dns" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.663045 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.663133 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.663227 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.663283 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.663358 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api-log" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.663421 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api-log" Jan 30 22:03:27 crc kubenswrapper[4979]: E0130 22:03:27.663485 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api-log" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.663553 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api-log" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.663840 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api-log" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.663959 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa8f9d6-442a-4070-b11f-13564f4c2c43" containerName="barbican-api" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.664047 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="734e25b4-90d2-466b-a71d-029b7fd4b491" containerName="dnsmasq-dns" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.664118 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api-log" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.664175 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" containerName="cinder-api" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.665368 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.670485 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.671095 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.671181 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.677909 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.708703 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data-custom\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.708761 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-scripts\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709116 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709241 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv5tg\" (UniqueName: \"kubernetes.io/projected/54d2662c-bd60-4a08-accd-e30f0a51518c-kube-api-access-bv5tg\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709292 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709331 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709378 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54d2662c-bd60-4a08-accd-e30f0a51518c-logs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709445 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.709501 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54d2662c-bd60-4a08-accd-e30f0a51518c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.782848 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.810780 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.810871 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54d2662c-bd60-4a08-accd-e30f0a51518c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.810917 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data-custom\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.810947 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-scripts\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.810987 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54d2662c-bd60-4a08-accd-e30f0a51518c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.811049 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.811087 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv5tg\" (UniqueName: \"kubernetes.io/projected/54d2662c-bd60-4a08-accd-e30f0a51518c-kube-api-access-bv5tg\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.811123 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " 
pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.811147 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.811193 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54d2662c-bd60-4a08-accd-e30f0a51518c-logs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.811813 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54d2662c-bd60-4a08-accd-e30f0a51518c-logs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.816535 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-scripts\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.816687 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data-custom\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.816903 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.819073 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.819999 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.820922 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:27 crc kubenswrapper[4979]: I0130 22:03:27.834749 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv5tg\" (UniqueName: \"kubernetes.io/projected/54d2662c-bd60-4a08-accd-e30f0a51518c-kube-api-access-bv5tg\") pod \"cinder-api-0\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " pod="openstack/cinder-api-0" Jan 30 22:03:28 crc kubenswrapper[4979]: I0130 
22:03:28.028300 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:03:28 crc kubenswrapper[4979]: I0130 22:03:28.496969 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.121388 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0bc9229-6c16-4bd2-b677-f26acb49716e" path="/var/lib/kubelet/pods/c0bc9229-6c16-4bd2-b677-f26acb49716e/volumes" Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.324081 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54d2662c-bd60-4a08-accd-e30f0a51518c","Type":"ContainerStarted","Data":"70c9e4b75f4b6026504bbe59f295f79a6dc13bad465ac3a98878072f04debbd7"} Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.324547 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54d2662c-bd60-4a08-accd-e30f0a51518c","Type":"ContainerStarted","Data":"63cab1632ab5734414fe0ad9e4d6c6c07d6d67f4ee2af410de1ca78ec4b0eb26"} Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.417177 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.604139 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.683819 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8467c9fd48-4d9pm"] Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.686735 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-8467c9fd48-4d9pm" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-api" containerID="cri-o://87f8bcdd0e14129a26f5189ed15ff85e52384caaf6a89397573a159ccff40e22" gracePeriod=30 Jan 30 22:03:29 crc kubenswrapper[4979]: I0130 22:03:29.686882 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-8467c9fd48-4d9pm" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-log" containerID="cri-o://7ad5003e1477b67c4d2b787fced03c11f214a9bb0cc53bcbe57eceed0842467d" gracePeriod=30 Jan 30 22:03:30 crc kubenswrapper[4979]: I0130 22:03:30.240293 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:03:30 crc kubenswrapper[4979]: I0130 22:03:30.336591 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54d2662c-bd60-4a08-accd-e30f0a51518c","Type":"ContainerStarted","Data":"33be242a70bfcf61aafc753268bb59c2e8a2a55bfc2666cef9e675491b558cd9"} Jan 30 22:03:30 crc kubenswrapper[4979]: I0130 22:03:30.336846 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 22:03:30 crc kubenswrapper[4979]: I0130 22:03:30.341111 4979 generic.go:334] "Generic (PLEG): container finished" podID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerID="7ad5003e1477b67c4d2b787fced03c11f214a9bb0cc53bcbe57eceed0842467d" exitCode=143 Jan 30 22:03:30 crc kubenswrapper[4979]: I0130 22:03:30.341301 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8467c9fd48-4d9pm" 
event={"ID":"fde9bde2-8262-41c5-b037-d2d4a44575f7","Type":"ContainerDied","Data":"7ad5003e1477b67c4d2b787fced03c11f214a9bb0cc53bcbe57eceed0842467d"} Jan 30 22:03:30 crc kubenswrapper[4979]: I0130 22:03:30.377062 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.377019347 podStartE2EDuration="3.377019347s" podCreationTimestamp="2026-01-30 22:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:30.357708013 +0000 UTC m=+1406.318955046" watchObservedRunningTime="2026-01-30 22:03:30.377019347 +0000 UTC m=+1406.338266380" Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.923351 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.925856 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.928201 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.929647 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9brkn" Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.929883 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.943368 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 22:03:32 crc kubenswrapper[4979]: I0130 22:03:32.947960 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.065087 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s2cv2"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.065485 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerName="dnsmasq-dns" containerID="cri-o://079a608dc24b31a4f88315dc45c4eac9e51e3ae04392a654b2e5881b47f5deea" gracePeriod=10 Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.067697 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.067894 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9tzk\" (UniqueName: \"kubernetes.io/projected/2b9a35db-944b-404f-8936-55d7bf448619-kube-api-access-x9tzk\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.068056 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " 
pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.068120 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config-secret\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.113596 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.170056 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.170587 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.170746 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9tzk\" (UniqueName: \"kubernetes.io/projected/2b9a35db-944b-404f-8936-55d7bf448619-kube-api-access-x9tzk\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.170850 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.170942 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config-secret\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.173222 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.183024 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.183046 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config-secret\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.194624 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9tzk\" (UniqueName: 
\"kubernetes.io/projected/2b9a35db-944b-404f-8936-55d7bf448619-kube-api-access-x9tzk\") pod \"openstackclient\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.275564 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.334100 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.350849 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.365348 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.366793 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.382363 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.458936 4979 generic.go:334] "Generic (PLEG): container finished" podID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerID="87f8bcdd0e14129a26f5189ed15ff85e52384caaf6a89397573a159ccff40e22" exitCode=0 Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.459023 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8467c9fd48-4d9pm" event={"ID":"fde9bde2-8262-41c5-b037-d2d4a44575f7","Type":"ContainerDied","Data":"87f8bcdd0e14129a26f5189ed15ff85e52384caaf6a89397573a159ccff40e22"} Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.460956 4979 generic.go:334] "Generic (PLEG): container finished" podID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerID="079a608dc24b31a4f88315dc45c4eac9e51e3ae04392a654b2e5881b47f5deea" exitCode=0 Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.461268 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="cinder-scheduler" containerID="cri-o://dd60a59ae6cdfbc405f90d689ab84d25f406577d4b685fef4f1f04460e816ffb" gracePeriod=30 Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.461412 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="probe" containerID="cri-o://af1e56adf69dc8dcae71e643ccc863182f7586ad5f57a96be638e265eb505d2d" gracePeriod=30 Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.461084 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" event={"ID":"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8","Type":"ContainerDied","Data":"079a608dc24b31a4f88315dc45c4eac9e51e3ae04392a654b2e5881b47f5deea"} Jan 30 22:03:33 crc kubenswrapper[4979]: E0130 22:03:33.473613 4979 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 22:03:33 crc kubenswrapper[4979]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_2b9a35db-944b-404f-8936-55d7bf448619_0(b4b517403274d7686e4283f00f104c753337c5d2dc4b7ca8932eba20ccc8f088): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"b4b517403274d7686e4283f00f104c753337c5d2dc4b7ca8932eba20ccc8f088" Netns:"/var/run/netns/386f794f-236f-447a-8437-ea21352d89c3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=b4b517403274d7686e4283f00f104c753337c5d2dc4b7ca8932eba20ccc8f088;K8S_POD_UID=2b9a35db-944b-404f-8936-55d7bf448619" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/2b9a35db-944b-404f-8936-55d7bf448619]: expected pod UID "2b9a35db-944b-404f-8936-55d7bf448619" but got "82508003-60c8-463b-92a9-bc9521fcfa03" from Kube API Jan 30 22:03:33 crc kubenswrapper[4979]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 22:03:33 crc kubenswrapper[4979]: > Jan 30 22:03:33 crc kubenswrapper[4979]: E0130 22:03:33.473702 4979 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 22:03:33 crc kubenswrapper[4979]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_2b9a35db-944b-404f-8936-55d7bf448619_0(b4b517403274d7686e4283f00f104c753337c5d2dc4b7ca8932eba20ccc8f088): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b4b517403274d7686e4283f00f104c753337c5d2dc4b7ca8932eba20ccc8f088" Netns:"/var/run/netns/386f794f-236f-447a-8437-ea21352d89c3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=b4b517403274d7686e4283f00f104c753337c5d2dc4b7ca8932eba20ccc8f088;K8S_POD_UID=2b9a35db-944b-404f-8936-55d7bf448619" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/2b9a35db-944b-404f-8936-55d7bf448619]: expected pod UID "2b9a35db-944b-404f-8936-55d7bf448619" but got "82508003-60c8-463b-92a9-bc9521fcfa03" from Kube API Jan 30 22:03:33 crc kubenswrapper[4979]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 22:03:33 crc kubenswrapper[4979]: > pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.478512 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t85tp\" (UniqueName: \"kubernetes.io/projected/82508003-60c8-463b-92a9-bc9521fcfa03-kube-api-access-t85tp\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.478795 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.478966 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config-secret\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.479186 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.573285 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.584309 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t85tp\" (UniqueName: \"kubernetes.io/projected/82508003-60c8-463b-92a9-bc9521fcfa03-kube-api-access-t85tp\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.584448 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-combined-ca-bundle\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.584519 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config-secret\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.584579 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.585702 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.592126 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-combined-ca-bundle\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.595146 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.605859 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t85tp\" (UniqueName: \"kubernetes.io/projected/82508003-60c8-463b-92a9-bc9521fcfa03-kube-api-access-t85tp\") pod \"openstackclient\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.685967 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-config\") pod \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.686470 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-nb\") pod \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.686552 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dsxp\" (UniqueName: \"kubernetes.io/projected/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-kube-api-access-2dsxp\") pod \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.686697 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-svc\") pod \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.686751 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-swift-storage-0\") pod \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.686799 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-sb\") pod \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\" (UID: \"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.697692 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-kube-api-access-2dsxp" (OuterVolumeSpecName: "kube-api-access-2dsxp") pod "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" (UID: "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8"). InnerVolumeSpecName "kube-api-access-2dsxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.737682 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.757537 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" (UID: "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.774401 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" (UID: "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.776934 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" (UID: "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.793444 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.794384 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.794497 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dsxp\" (UniqueName: \"kubernetes.io/projected/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-kube-api-access-2dsxp\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.794578 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.797146 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-config" (OuterVolumeSpecName: "config") pod "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" (UID: "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.810222 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.813511 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" (UID: "3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896246 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spxhx\" (UniqueName: \"kubernetes.io/projected/fde9bde2-8262-41c5-b037-d2d4a44575f7-kube-api-access-spxhx\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896373 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde9bde2-8262-41c5-b037-d2d4a44575f7-logs\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896468 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-internal-tls-certs\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896525 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-config-data\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896615 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-scripts\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896733 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-combined-ca-bundle\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.896802 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-public-tls-certs\") pod \"fde9bde2-8262-41c5-b037-d2d4a44575f7\" (UID: \"fde9bde2-8262-41c5-b037-d2d4a44575f7\") " Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.897877 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde9bde2-8262-41c5-b037-d2d4a44575f7-logs" (OuterVolumeSpecName: "logs") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.898616 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.898632 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde9bde2-8262-41c5-b037-d2d4a44575f7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.898646 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.908472 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde9bde2-8262-41c5-b037-d2d4a44575f7-kube-api-access-spxhx" (OuterVolumeSpecName: "kube-api-access-spxhx") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "kube-api-access-spxhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.928450 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-scripts" (OuterVolumeSpecName: "scripts") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.991587 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:33 crc kubenswrapper[4979]: I0130 22:03:33.992689 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-config-data" (OuterVolumeSpecName: "config-data") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.001881 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.001938 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.001955 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spxhx\" (UniqueName: \"kubernetes.io/projected/fde9bde2-8262-41c5-b037-d2d4a44575f7-kube-api-access-spxhx\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.001967 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.067136 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.086262 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fde9bde2-8262-41c5-b037-d2d4a44575f7" (UID: "fde9bde2-8262-41c5-b037-d2d4a44575f7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.103811 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.103864 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fde9bde2-8262-41c5-b037-d2d4a44575f7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.282545 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.493759 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"82508003-60c8-463b-92a9-bc9521fcfa03","Type":"ContainerStarted","Data":"d1e04049c4842166c6044361f7384530e87c43b6de980410a3f541d60c5053b9"} Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.497635 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8467c9fd48-4d9pm" event={"ID":"fde9bde2-8262-41c5-b037-d2d4a44575f7","Type":"ContainerDied","Data":"3f6e1720fbdfa450cb84e7986470398e83ff14833f01c282921516d94399a109"} Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.497661 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8467c9fd48-4d9pm" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.497736 4979 scope.go:117] "RemoveContainer" containerID="87f8bcdd0e14129a26f5189ed15ff85e52384caaf6a89397573a159ccff40e22" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.500048 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" event={"ID":"3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8","Type":"ContainerDied","Data":"0212d06a744f8cd1b66d318c030ed6a7f7216496fa3e7ef430e0ba4efdf447a9"} Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.500066 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-s2cv2" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.506354 4979 generic.go:334] "Generic (PLEG): container finished" podID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerID="af1e56adf69dc8dcae71e643ccc863182f7586ad5f57a96be638e265eb505d2d" exitCode=0 Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.506435 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.506458 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0","Type":"ContainerDied","Data":"af1e56adf69dc8dcae71e643ccc863182f7586ad5f57a96be638e265eb505d2d"} Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.513864 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2b9a35db-944b-404f-8936-55d7bf448619" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.521881 4979 util.go:30] "No sandbox for pod can be found. 
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.528929 4979 scope.go:117] "RemoveContainer" containerID="7ad5003e1477b67c4d2b787fced03c11f214a9bb0cc53bcbe57eceed0842467d"
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.551119 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8467c9fd48-4d9pm"]
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.564089 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8467c9fd48-4d9pm"]
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.568764 4979 scope.go:117] "RemoveContainer" containerID="079a608dc24b31a4f88315dc45c4eac9e51e3ae04392a654b2e5881b47f5deea"
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.573309 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s2cv2"]
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.580821 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s2cv2"]
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.592753 4979 scope.go:117] "RemoveContainer" containerID="28fa5fdce3759a70252b84e9d2a3128dd1ea647aeca78f30af0e925e772a5b64"
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.615235 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9tzk\" (UniqueName: \"kubernetes.io/projected/2b9a35db-944b-404f-8936-55d7bf448619-kube-api-access-x9tzk\") pod \"2b9a35db-944b-404f-8936-55d7bf448619\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") "
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.615423 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-combined-ca-bundle\") pod \"2b9a35db-944b-404f-8936-55d7bf448619\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") "
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.615504 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config-secret\") pod \"2b9a35db-944b-404f-8936-55d7bf448619\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") "
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.615559 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config\") pod \"2b9a35db-944b-404f-8936-55d7bf448619\" (UID: \"2b9a35db-944b-404f-8936-55d7bf448619\") "
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.617330 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2b9a35db-944b-404f-8936-55d7bf448619" (UID: "2b9a35db-944b-404f-8936-55d7bf448619"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.621317 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2b9a35db-944b-404f-8936-55d7bf448619" (UID: "2b9a35db-944b-404f-8936-55d7bf448619"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.621518 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9a35db-944b-404f-8936-55d7bf448619-kube-api-access-x9tzk" (OuterVolumeSpecName: "kube-api-access-x9tzk") pod "2b9a35db-944b-404f-8936-55d7bf448619" (UID: "2b9a35db-944b-404f-8936-55d7bf448619"). InnerVolumeSpecName "kube-api-access-x9tzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.622308 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b9a35db-944b-404f-8936-55d7bf448619" (UID: "2b9a35db-944b-404f-8936-55d7bf448619"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.718919 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9tzk\" (UniqueName: \"kubernetes.io/projected/2b9a35db-944b-404f-8936-55d7bf448619-kube-api-access-x9tzk\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.718969 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.718979 4979 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:34 crc kubenswrapper[4979]: I0130 22:03:34.718988 4979 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2b9a35db-944b-404f-8936-55d7bf448619-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:35 crc kubenswrapper[4979]: I0130 22:03:35.085958 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b9a35db-944b-404f-8936-55d7bf448619" path="/var/lib/kubelet/pods/2b9a35db-944b-404f-8936-55d7bf448619/volumes"
Jan 30 22:03:35 crc kubenswrapper[4979]: I0130 22:03:35.086922 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" path="/var/lib/kubelet/pods/3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8/volumes"
Jan 30 22:03:35 crc kubenswrapper[4979]: I0130 22:03:35.087739 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" path="/var/lib/kubelet/pods/fde9bde2-8262-41c5-b037-d2d4a44575f7/volumes"
Jan 30 22:03:35 crc kubenswrapper[4979]: I0130 22:03:35.525915 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 30 22:03:35 crc kubenswrapper[4979]: I0130 22:03:35.537156 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2b9a35db-944b-404f-8936-55d7bf448619" podUID="82508003-60c8-463b-92a9-bc9521fcfa03"
Jan 30 22:03:37 crc kubenswrapper[4979]: I0130 22:03:37.561318 4979 generic.go:334] "Generic (PLEG): container finished" podID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerID="dd60a59ae6cdfbc405f90d689ab84d25f406577d4b685fef4f1f04460e816ffb" exitCode=0
Jan 30 22:03:37 crc kubenswrapper[4979]: I0130 22:03:37.561596 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0","Type":"ContainerDied","Data":"dd60a59ae6cdfbc405f90d689ab84d25f406577d4b685fef4f1f04460e816ffb"}
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.009831 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.103107 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data-custom\") pod \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") "
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104011 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-etc-machine-id\") pod \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") "
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104083 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data\") pod \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") "
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104092 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" (UID: "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104170 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7trcb\" (UniqueName: \"kubernetes.io/projected/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-kube-api-access-7trcb\") pod \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") "
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104204 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-scripts\") pod \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") "
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104237 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-combined-ca-bundle\") pod \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\" (UID: \"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0\") "
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.104935 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.113533 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-scripts" (OuterVolumeSpecName: "scripts") pod "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" (UID: "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.113662 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" (UID: "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.124287 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-kube-api-access-7trcb" (OuterVolumeSpecName: "kube-api-access-7trcb") pod "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" (UID: "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0"). InnerVolumeSpecName "kube-api-access-7trcb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.170466 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" (UID: "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.209585 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7trcb\" (UniqueName: \"kubernetes.io/projected/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-kube-api-access-7trcb\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.209637 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.209652 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.209663 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.262418 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data" (OuterVolumeSpecName: "config-data") pod "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" (UID: "8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.311744 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.576373 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0","Type":"ContainerDied","Data":"9e233c467b56b274cf91a0fd383468a12ee48c944ec900a8f2ba3fafe0a3e4a7"}
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.576470 4979 scope.go:117] "RemoveContainer" containerID="af1e56adf69dc8dcae71e643ccc863182f7586ad5f57a96be638e265eb505d2d"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.576483 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.615256 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.625090 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.647647 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 22:03:38 crc kubenswrapper[4979]: E0130 22:03:38.649069 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="probe"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.649188 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="probe"
Jan 30 22:03:38 crc kubenswrapper[4979]: E0130 22:03:38.649296 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-api"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.649428 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-api"
Jan 30 22:03:38 crc kubenswrapper[4979]: E0130 22:03:38.649527 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-log"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.649596 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-log"
Jan 30 22:03:38 crc kubenswrapper[4979]: E0130 22:03:38.649672 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="cinder-scheduler"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.649741 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="cinder-scheduler"
Jan 30 22:03:38 crc kubenswrapper[4979]: E0130 22:03:38.649828 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerName="init"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.649899 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerName="init"
Jan 30 22:03:38 crc kubenswrapper[4979]: E0130 22:03:38.650160 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerName="dnsmasq-dns"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.650257 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerName="dnsmasq-dns"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.650582 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-log"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.650673 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="cinder-scheduler"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.650762 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc2890a-dab6-4a6e-a7fd-a26feb5b2bb8" containerName="dnsmasq-dns"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.650842 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde9bde2-8262-41c5-b037-d2d4a44575f7" containerName="placement-api"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.650925 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" containerName="probe"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.652482 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.657867 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.663975 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.719424 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.719501 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.719532 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-scripts\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.719560 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4nsx\" (UniqueName: \"kubernetes.io/projected/21dfd874-e50d-4e61-a634-9f47ee92ff4f-kube-api-access-d4nsx\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.719581 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21dfd874-e50d-4e61-a634-9f47ee92ff4f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.719697 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.823935 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.824122 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.824221 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.824254 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-scripts\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.824279 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4nsx\" (UniqueName: \"kubernetes.io/projected/21dfd874-e50d-4e61-a634-9f47ee92ff4f-kube-api-access-d4nsx\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.824301 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21dfd874-e50d-4e61-a634-9f47ee92ff4f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.824437 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21dfd874-e50d-4e61-a634-9f47ee92ff4f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.836525 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.836604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-scripts\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.836786 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.837289 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.843728 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4nsx\" (UniqueName: \"kubernetes.io/projected/21dfd874-e50d-4e61-a634-9f47ee92ff4f-kube-api-access-d4nsx\") pod \"cinder-scheduler-0\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " pod="openstack/cinder-scheduler-0"
Jan 30 22:03:38 crc kubenswrapper[4979]: I0130 22:03:38.981527 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.093825 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0" path="/var/lib/kubelet/pods/8ac0bbe3-c81f-4d00-bfd1-add18e4f49a0/volumes"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.687742 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d7cdf56b7-lf2dc"]
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.691608 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.697479 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.699990 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.702679 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.711988 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d7cdf56b7-lf2dc"]
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.752558 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-etc-swift\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.752640 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-combined-ca-bundle\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.752735 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-internal-tls-certs\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.752760 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2cqk\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-kube-api-access-x2cqk\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.753156 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-public-tls-certs\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.753264 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-run-httpd\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.753383 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-config-data\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.753433 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-log-httpd\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.855801 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-config-data\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.856246 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-log-httpd\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.856270 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-etc-swift\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.856294 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-combined-ca-bundle\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.856331 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2cqk\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-kube-api-access-x2cqk\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.856350 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-internal-tls-certs\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.856963 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-log-httpd\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.857536 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-public-tls-certs\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.857612 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-run-httpd\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.857899 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-run-httpd\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.863453 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-config-data\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.864139 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-etc-swift\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.867705 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-combined-ca-bundle\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.875743 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-internal-tls-certs\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.876330 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-public-tls-certs\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:39 crc kubenswrapper[4979]: I0130 22:03:39.876676 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2cqk\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-kube-api-access-x2cqk\") pod \"swift-proxy-6d7cdf56b7-lf2dc\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:40 crc kubenswrapper[4979]: I0130 22:03:40.019911 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc"
Jan 30 22:03:40 crc kubenswrapper[4979]: I0130 22:03:40.171350 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 30 22:03:41 crc kubenswrapper[4979]: I0130 22:03:41.737874 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 22:03:41 crc kubenswrapper[4979]: I0130 22:03:41.738681 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-central-agent" containerID="cri-o://d353219cc9b3f8542020689ad8fe1dc4cafe48d65da929904d82b00146b5cd56" gracePeriod=30
Jan 30 22:03:41 crc kubenswrapper[4979]: I0130 22:03:41.739242 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-notification-agent" containerID="cri-o://8198ed5db540cb004bee8f636d59637892dad01fde8c2addcb3d150233b81eb8" gracePeriod=30
Jan 30 22:03:41 crc kubenswrapper[4979]: I0130 22:03:41.739284 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="sg-core" containerID="cri-o://379541b071bcc3ff3b76c9a28614a8a7781d3946bd15e75deec7d7faf821f69f" gracePeriod=30
Jan 30 22:03:41 crc kubenswrapper[4979]: I0130 22:03:41.739492 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="proxy-httpd" containerID="cri-o://b4264ee1205b9f14594303d45e026381cac0a39c9757db5ba8d73f991ffb0e32" gracePeriod=30
Jan 30 22:03:41 crc kubenswrapper[4979]: I0130 22:03:41.756327 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": EOF"
Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.553809 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-575496bbc6-tpmv9"
Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.658288 4979 generic.go:334] "Generic (PLEG): container finished" podID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerID="b4264ee1205b9f14594303d45e026381cac0a39c9757db5ba8d73f991ffb0e32" exitCode=0
Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.658347 4979 generic.go:334] "Generic (PLEG): container finished" podID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerID="379541b071bcc3ff3b76c9a28614a8a7781d3946bd15e75deec7d7faf821f69f" exitCode=2
Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.658359 4979 generic.go:334] "Generic (PLEG): container finished" podID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerID="d353219cc9b3f8542020689ad8fe1dc4cafe48d65da929904d82b00146b5cd56" exitCode=0
Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.658391 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerDied","Data":"b4264ee1205b9f14594303d45e026381cac0a39c9757db5ba8d73f991ffb0e32"}
Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.658434 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerDied","Data":"379541b071bcc3ff3b76c9a28614a8a7781d3946bd15e75deec7d7faf821f69f"}
Jan 30 22:03:42 crc kubenswrapper[4979]: I0130 22:03:42.658452 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerDied","Data":"d353219cc9b3f8542020689ad8fe1dc4cafe48d65da929904d82b00146b5cd56"}
Jan 30 22:03:43 crc kubenswrapper[4979]: I0130 22:03:43.550164 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": dial tcp 10.217.0.163:3000: connect: connection refused"
Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.019472 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-ccc5789d5-9fbcz"
Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.126441 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-575496bbc6-tpmv9"]
Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.126765 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-575496bbc6-tpmv9" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-api" containerID="cri-o://31b519ed42ee2d318c5e8593b192627b5f74f877124ccf9521649301b379434d" gracePeriod=30
Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.126945 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-575496bbc6-tpmv9" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-httpd" containerID="cri-o://a9f9f27cea01a15c9754036e794a52a02aaf9c4cde1417cb268dd678a86d49a7" gracePeriod=30
Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.703098 4979 scope.go:117] "RemoveContainer" containerID="dd60a59ae6cdfbc405f90d689ab84d25f406577d4b685fef4f1f04460e816ffb"
Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.712120 4979 generic.go:334] "Generic (PLEG): container finished" podID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerID="a9f9f27cea01a15c9754036e794a52a02aaf9c4cde1417cb268dd678a86d49a7" exitCode=0
Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.712249 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575496bbc6-tpmv9" event={"ID":"ba4b7345-9c9c-46e9-ac9a-d84093867012","Type":"ContainerDied","Data":"a9f9f27cea01a15c9754036e794a52a02aaf9c4cde1417cb268dd678a86d49a7"}
Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.725244 4979 generic.go:334] "Generic (PLEG): container finished" podID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerID="8198ed5db540cb004bee8f636d59637892dad01fde8c2addcb3d150233b81eb8" exitCode=0
Jan 30 22:03:46 crc kubenswrapper[4979]: I0130 22:03:46.725299 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerDied","Data":"8198ed5db540cb004bee8f636d59637892dad01fde8c2addcb3d150233b81eb8"}
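The ceilometer-0 shutdown above shows the expected shape of a graceful delete: each container is killed with gracePeriod=30, and the readiness probe failures against http://10.217.0.163:3000/ (first EOF, then connection refused) are a normal side effect of proxy-httpd going away, not an independent fault. A conceptual sketch of the grace-period sequence (illustrative only, not CRI-O source; proc is assumed to be a subprocess.Popen handle):

    import signal
    import time

    def kill_container(proc, grace_period=30):
        """SIGTERM first, wait up to grace_period seconds, then SIGKILL."""
        proc.send_signal(signal.SIGTERM)
        deadline = time.monotonic() + grace_period
        while time.monotonic() < deadline:
            if proc.poll() is not None:   # exited on its own, as the
                return proc.returncode    # exitCode=0 / exitCode=2 lines show
            time.sleep(0.1)
        proc.kill()                       # grace period exhausted
        return proc.wait()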
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerDied","Data":"8198ed5db540cb004bee8f636d59637892dad01fde8c2addcb3d150233b81eb8"} Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.043521 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.125832 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-log-httpd\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.125904 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-scripts\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.125990 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-sg-core-conf-yaml\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.126063 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-config-data\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.126084 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fnwm\" (UniqueName: \"kubernetes.io/projected/ed53d4b7-eca6-4720-95ca-82db55e50fe7-kube-api-access-9fnwm\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.126815 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-run-httpd\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.127408 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-combined-ca-bundle\") pod \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\" (UID: \"ed53d4b7-eca6-4720-95ca-82db55e50fe7\") " Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.127452 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.127706 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.128192 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.128209 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed53d4b7-eca6-4720-95ca-82db55e50fe7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.132651 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed53d4b7-eca6-4720-95ca-82db55e50fe7-kube-api-access-9fnwm" (OuterVolumeSpecName: "kube-api-access-9fnwm") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "kube-api-access-9fnwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.157280 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-scripts" (OuterVolumeSpecName: "scripts") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.167320 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.227975 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.232999 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.233040 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.233056 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fnwm\" (UniqueName: \"kubernetes.io/projected/ed53d4b7-eca6-4720-95ca-82db55e50fe7-kube-api-access-9fnwm\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.233066 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.252633 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-config-data" (OuterVolumeSpecName: "config-data") pod "ed53d4b7-eca6-4720-95ca-82db55e50fe7" (UID: "ed53d4b7-eca6-4720-95ca-82db55e50fe7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.294591 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:03:47 crc kubenswrapper[4979]: W0130 22:03:47.306995 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21dfd874_e50d_4e61_a634_9f47ee92ff4f.slice/crio-7b54d9cd9b678a4fb7d379f7d73256fcce04b1be22cb1e39a15b4c8b5b614aed WatchSource:0}: Error finding container 7b54d9cd9b678a4fb7d379f7d73256fcce04b1be22cb1e39a15b4c8b5b614aed: Status 404 returned error can't find the container with id 7b54d9cd9b678a4fb7d379f7d73256fcce04b1be22cb1e39a15b4c8b5b614aed Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.335346 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed53d4b7-eca6-4720-95ca-82db55e50fe7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.559930 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d7cdf56b7-lf2dc"] Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.746364 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"82508003-60c8-463b-92a9-bc9521fcfa03","Type":"ContainerStarted","Data":"6ccf84aaaded71906e123ab07138f1d46a5f8b45f0e088139ccd8642a91c4d8c"} Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.749521 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" event={"ID":"b4e29508-bcd2-4f07-807c-dde529c4fa24","Type":"ContainerStarted","Data":"b64735411ca3cd7394e31868ccdaa7a77e584aec6259c66bd68d292da88aa3c5"} Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.765736 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.276291577 podStartE2EDuration="14.765714818s" 
podCreationTimestamp="2026-01-30 22:03:33 +0000 UTC" firstStartedPulling="2026-01-30 22:03:34.279272995 +0000 UTC m=+1410.240520028" lastFinishedPulling="2026-01-30 22:03:46.768696236 +0000 UTC m=+1422.729943269" observedRunningTime="2026-01-30 22:03:47.764113195 +0000 UTC m=+1423.725360238" watchObservedRunningTime="2026-01-30 22:03:47.765714818 +0000 UTC m=+1423.726961851" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.777923 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"21dfd874-e50d-4e61-a634-9f47ee92ff4f","Type":"ContainerStarted","Data":"7b54d9cd9b678a4fb7d379f7d73256fcce04b1be22cb1e39a15b4c8b5b614aed"} Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.794577 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed53d4b7-eca6-4720-95ca-82db55e50fe7","Type":"ContainerDied","Data":"6dbed89dcb99abab4522a3860a00ee5c7bea5cb37a875572e8e74067b72a1d9c"} Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.794685 4979 scope.go:117] "RemoveContainer" containerID="b4264ee1205b9f14594303d45e026381cac0a39c9757db5ba8d73f991ffb0e32" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.794884 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.852692 4979 scope.go:117] "RemoveContainer" containerID="379541b071bcc3ff3b76c9a28614a8a7781d3946bd15e75deec7d7faf821f69f" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.874289 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.889937 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.897978 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:47 crc kubenswrapper[4979]: E0130 22:03:47.898452 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="proxy-httpd" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898475 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="proxy-httpd" Jan 30 22:03:47 crc kubenswrapper[4979]: E0130 22:03:47.898489 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-notification-agent" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898518 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-notification-agent" Jan 30 22:03:47 crc kubenswrapper[4979]: E0130 22:03:47.898525 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="sg-core" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898532 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="sg-core" Jan 30 22:03:47 crc kubenswrapper[4979]: E0130 22:03:47.898545 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-central-agent" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898551 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" 
containerName="ceilometer-central-agent" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898745 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-notification-agent" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898762 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="sg-core" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898771 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="ceilometer-central-agent" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.898781 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" containerName="proxy-httpd" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.900512 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.907663 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.916736 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.922212 4979 scope.go:117] "RemoveContainer" containerID="8198ed5db540cb004bee8f636d59637892dad01fde8c2addcb3d150233b81eb8" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.934842 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977375 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977432 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-run-httpd\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977468 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-scripts\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977628 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977748 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-log-httpd\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 
22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977769 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7j6x\" (UniqueName: \"kubernetes.io/projected/be55c985-e7f8-499c-9ae4-3b96b20d1847-kube-api-access-k7j6x\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:47 crc kubenswrapper[4979]: I0130 22:03:47.977796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-config-data\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.000922 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.010339 4979 scope.go:117] "RemoveContainer" containerID="d353219cc9b3f8542020689ad8fe1dc4cafe48d65da929904d82b00146b5cd56" Jan 30 22:03:48 crc kubenswrapper[4979]: E0130 22:03:48.011224 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-k7j6x log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="be55c985-e7f8-499c-9ae4-3b96b20d1847" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.079904 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-log-httpd\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.079975 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7j6x\" (UniqueName: \"kubernetes.io/projected/be55c985-e7f8-499c-9ae4-3b96b20d1847-kube-api-access-k7j6x\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.080080 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-config-data\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.080224 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.080257 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-run-httpd\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.080280 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-scripts\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " 
pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.080341 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.080602 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-log-httpd\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.084306 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-run-httpd\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.086595 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.087249 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.088199 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-config-data\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.089900 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-scripts\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.103465 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7j6x\" (UniqueName: \"kubernetes.io/projected/be55c985-e7f8-499c-9ae4-3b96b20d1847-kube-api-access-k7j6x\") pod \"ceilometer-0\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.822533 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.823419 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-log" containerID="cri-o://87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d" gracePeriod=30 Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.823557 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-httpd" containerID="cri-o://14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd" gracePeriod=30 Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.832703 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" event={"ID":"b4e29508-bcd2-4f07-807c-dde529c4fa24","Type":"ContainerStarted","Data":"6a656e436b19b339c0c277b8bbce77e23d12a120c342e1158752b1f56079e1d7"} Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.832773 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" event={"ID":"b4e29508-bcd2-4f07-807c-dde529c4fa24","Type":"ContainerStarted","Data":"b5cd75c070f4563e5400007f2a3b5fc99f54b10f69882167ae699e694edff112"} Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.832909 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.832951 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.842447 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"21dfd874-e50d-4e61-a634-9f47ee92ff4f","Type":"ContainerStarted","Data":"3c9f500d96b7f2b3e97c54f28c77ed3aa52150d439c4b7859470421455c33714"} Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.846193 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.862276 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" podStartSLOduration=9.862258749 podStartE2EDuration="9.862258749s" podCreationTimestamp="2026-01-30 22:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:48.861275423 +0000 UTC m=+1424.822522456" watchObservedRunningTime="2026-01-30 22:03:48.862258749 +0000 UTC m=+1424.823505782" Jan 30 22:03:48 crc kubenswrapper[4979]: I0130 22:03:48.868783 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004138 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-combined-ca-bundle\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004277 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7j6x\" (UniqueName: \"kubernetes.io/projected/be55c985-e7f8-499c-9ae4-3b96b20d1847-kube-api-access-k7j6x\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004408 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-log-httpd\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004567 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-scripts\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004620 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-run-httpd\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004656 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-config-data\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004693 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-sg-core-conf-yaml\") pod \"be55c985-e7f8-499c-9ae4-3b96b20d1847\" (UID: \"be55c985-e7f8-499c-9ae4-3b96b20d1847\") " Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004750 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.004971 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.005191 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.005210 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be55c985-e7f8-499c-9ae4-3b96b20d1847-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.012525 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be55c985-e7f8-499c-9ae4-3b96b20d1847-kube-api-access-k7j6x" (OuterVolumeSpecName: "kube-api-access-k7j6x") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "kube-api-access-k7j6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.027283 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-config-data" (OuterVolumeSpecName: "config-data") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.027811 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-scripts" (OuterVolumeSpecName: "scripts") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.030164 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.030281 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be55c985-e7f8-499c-9ae4-3b96b20d1847" (UID: "be55c985-e7f8-499c-9ae4-3b96b20d1847"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.083344 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed53d4b7-eca6-4720-95ca-82db55e50fe7" path="/var/lib/kubelet/pods/ed53d4b7-eca6-4720-95ca-82db55e50fe7/volumes" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.107543 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.107584 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.107595 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.107605 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be55c985-e7f8-499c-9ae4-3b96b20d1847-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.107614 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7j6x\" (UniqueName: \"kubernetes.io/projected/be55c985-e7f8-499c-9ae4-3b96b20d1847-kube-api-access-k7j6x\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.903587 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"21dfd874-e50d-4e61-a634-9f47ee92ff4f","Type":"ContainerStarted","Data":"998a3106aba2ac42665d88c13615a533640da17728cf5d2d8129a1a9548dfb1e"} Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.907044 4979 generic.go:334] "Generic (PLEG): container finished" podID="c3b83faf-96cc-4787-814f-774416ea9811" containerID="87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d" exitCode=143 Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.907149 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b83faf-96cc-4787-814f-774416ea9811","Type":"ContainerDied","Data":"87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d"} Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.907192 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.942837 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=11.942807511 podStartE2EDuration="11.942807511s" podCreationTimestamp="2026-01-30 22:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:49.934164529 +0000 UTC m=+1425.895411572" watchObservedRunningTime="2026-01-30 22:03:49.942807511 +0000 UTC m=+1425.904054544" Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.983517 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:49 crc kubenswrapper[4979]: I0130 22:03:49.991636 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.028111 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.031153 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.038405 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.039543 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.068608 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.163517 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.164939 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-run-httpd\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.165093 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-scripts\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.165211 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-config-data\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.165297 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msbfp\" (UniqueName: \"kubernetes.io/projected/91a73a79-d17b-4370-a554-acccc33344ba-kube-api-access-msbfp\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " 
pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.165413 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-log-httpd\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.165502 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267480 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-run-httpd\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267559 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-scripts\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267597 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-config-data\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267638 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msbfp\" (UniqueName: \"kubernetes.io/projected/91a73a79-d17b-4370-a554-acccc33344ba-kube-api-access-msbfp\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267715 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-log-httpd\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267751 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.267808 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.268338 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-run-httpd\") pod \"ceilometer-0\" (UID: 
\"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.268674 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-log-httpd\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.276303 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.276927 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-config-data\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.277582 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-scripts\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.282046 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.310416 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msbfp\" (UniqueName: \"kubernetes.io/projected/91a73a79-d17b-4370-a554-acccc33344ba-kube-api-access-msbfp\") pod \"ceilometer-0\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.360428 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:03:50 crc kubenswrapper[4979]: I0130 22:03:50.931373 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:50 crc kubenswrapper[4979]: W0130 22:03:50.934146 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91a73a79_d17b_4370_a554_acccc33344ba.slice/crio-05d263b2ae4bb6d568013dad8e91f1c9cdedcc9f40a1f8559678d317541ba867 WatchSource:0}: Error finding container 05d263b2ae4bb6d568013dad8e91f1c9cdedcc9f40a1f8559678d317541ba867: Status 404 returned error can't find the container with id 05d263b2ae4bb6d568013dad8e91f1c9cdedcc9f40a1f8559678d317541ba867 Jan 30 22:03:51 crc kubenswrapper[4979]: I0130 22:03:51.082158 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be55c985-e7f8-499c-9ae4-3b96b20d1847" path="/var/lib/kubelet/pods/be55c985-e7f8-499c-9ae4-3b96b20d1847/volumes" Jan 30 22:03:51 crc kubenswrapper[4979]: I0130 22:03:51.832759 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:03:51 crc kubenswrapper[4979]: I0130 22:03:51.833724 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-log" containerID="cri-o://7f78fdfb980e393a32d3e4e14baa1b2c7a2c7e241035d08dc24473d3ebce5a53" gracePeriod=30 Jan 30 22:03:51 crc kubenswrapper[4979]: I0130 22:03:51.834303 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-httpd" containerID="cri-o://24204e17d4c44358eb3ce3054f01712860fc845201cf5a59bbd0c9532f6409e6" gracePeriod=30 Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.027507 4979 generic.go:334] "Generic (PLEG): container finished" podID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerID="31b519ed42ee2d318c5e8593b192627b5f74f877124ccf9521649301b379434d" exitCode=0 Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.028125 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575496bbc6-tpmv9" event={"ID":"ba4b7345-9c9c-46e9-ac9a-d84093867012","Type":"ContainerDied","Data":"31b519ed42ee2d318c5e8593b192627b5f74f877124ccf9521649301b379434d"} Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.032330 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerStarted","Data":"05d263b2ae4bb6d568013dad8e91f1c9cdedcc9f40a1f8559678d317541ba867"} Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.671986 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.772180 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-httpd-config\") pod \"ba4b7345-9c9c-46e9-ac9a-d84093867012\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.772401 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qjbm\" (UniqueName: \"kubernetes.io/projected/ba4b7345-9c9c-46e9-ac9a-d84093867012-kube-api-access-4qjbm\") pod \"ba4b7345-9c9c-46e9-ac9a-d84093867012\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.772565 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-combined-ca-bundle\") pod \"ba4b7345-9c9c-46e9-ac9a-d84093867012\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.772630 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-ovndb-tls-certs\") pod \"ba4b7345-9c9c-46e9-ac9a-d84093867012\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.772675 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-config\") pod \"ba4b7345-9c9c-46e9-ac9a-d84093867012\" (UID: \"ba4b7345-9c9c-46e9-ac9a-d84093867012\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.782817 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba4b7345-9c9c-46e9-ac9a-d84093867012-kube-api-access-4qjbm" (OuterVolumeSpecName: "kube-api-access-4qjbm") pod "ba4b7345-9c9c-46e9-ac9a-d84093867012" (UID: "ba4b7345-9c9c-46e9-ac9a-d84093867012"). InnerVolumeSpecName "kube-api-access-4qjbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.787675 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ba4b7345-9c9c-46e9-ac9a-d84093867012" (UID: "ba4b7345-9c9c-46e9-ac9a-d84093867012"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.848261 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-config" (OuterVolumeSpecName: "config") pod "ba4b7345-9c9c-46e9-ac9a-d84093867012" (UID: "ba4b7345-9c9c-46e9-ac9a-d84093867012"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.854701 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba4b7345-9c9c-46e9-ac9a-d84093867012" (UID: "ba4b7345-9c9c-46e9-ac9a-d84093867012"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.861551 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.874366 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-scripts\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.874447 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-combined-ca-bundle\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.874543 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-config-data\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.874582 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.874612 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-logs\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.874659 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj67s\" (UniqueName: \"kubernetes.io/projected/c3b83faf-96cc-4787-814f-774416ea9811-kube-api-access-hj67s\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.875647 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.875671 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.875680 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.875689 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qjbm\" (UniqueName: \"kubernetes.io/projected/ba4b7345-9c9c-46e9-ac9a-d84093867012-kube-api-access-4qjbm\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.878806 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-scripts" (OuterVolumeSpecName: "scripts") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.891594 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-logs" (OuterVolumeSpecName: "logs") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.895346 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b83faf-96cc-4787-814f-774416ea9811-kube-api-access-hj67s" (OuterVolumeSpecName: "kube-api-access-hj67s") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "kube-api-access-hj67s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.906558 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ba4b7345-9c9c-46e9-ac9a-d84093867012" (UID: "ba4b7345-9c9c-46e9-ac9a-d84093867012"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.906724 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.948515 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-config-data" (OuterVolumeSpecName: "config-data") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.949201 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.975946 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-public-tls-certs\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976022 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-httpd-run\") pod \"c3b83faf-96cc-4787-814f-774416ea9811\" (UID: \"c3b83faf-96cc-4787-814f-774416ea9811\") " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976328 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976362 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976374 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976395 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976414 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976423 4979 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4b7345-9c9c-46e9-ac9a-d84093867012-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.976433 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj67s\" (UniqueName: \"kubernetes.io/projected/c3b83faf-96cc-4787-814f-774416ea9811-kube-api-access-hj67s\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.981804 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:52 crc kubenswrapper[4979]: I0130 22:03:52.996661 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.007675 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.042111 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c3b83faf-96cc-4787-814f-774416ea9811" (UID: "c3b83faf-96cc-4787-814f-774416ea9811"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.047994 4979 generic.go:334] "Generic (PLEG): container finished" podID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerID="7f78fdfb980e393a32d3e4e14baa1b2c7a2c7e241035d08dc24473d3ebce5a53" exitCode=143 Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.048106 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6e002e48-1108-41f0-a1de-5a6b89d9e534","Type":"ContainerDied","Data":"7f78fdfb980e393a32d3e4e14baa1b2c7a2c7e241035d08dc24473d3ebce5a53"} Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.050552 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-575496bbc6-tpmv9" event={"ID":"ba4b7345-9c9c-46e9-ac9a-d84093867012","Type":"ContainerDied","Data":"d5b0558da39d39eea1a978ceb8d04e793a4cb1b04e75dc57e8d0bbef896534cc"} Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.050637 4979 scope.go:117] "RemoveContainer" containerID="a9f9f27cea01a15c9754036e794a52a02aaf9c4cde1417cb268dd678a86d49a7" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.050862 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-575496bbc6-tpmv9" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.059235 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerStarted","Data":"b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1"} Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.063093 4979 generic.go:334] "Generic (PLEG): container finished" podID="c3b83faf-96cc-4787-814f-774416ea9811" containerID="14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd" exitCode=0 Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.063143 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b83faf-96cc-4787-814f-774416ea9811","Type":"ContainerDied","Data":"14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd"} Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.063179 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3b83faf-96cc-4787-814f-774416ea9811","Type":"ContainerDied","Data":"3e810c936e02f2844b80a87456dccb9adbb5f44faaa30ddef373326002018cd3"} Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.063228 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.079817 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b83faf-96cc-4787-814f-774416ea9811-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.079859 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3b83faf-96cc-4787-814f-774416ea9811-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.079871 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.089537 4979 scope.go:117] "RemoveContainer" containerID="31b519ed42ee2d318c5e8593b192627b5f74f877124ccf9521649301b379434d" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.138994 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.146766 4979 scope.go:117] "RemoveContainer" containerID="14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.169300 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.178466 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-575496bbc6-tpmv9"] Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.184315 4979 scope.go:117] "RemoveContainer" containerID="87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.193495 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-575496bbc6-tpmv9"] Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.207189 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:03:53 crc kubenswrapper[4979]: E0130 22:03:53.207831 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-httpd" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.207853 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-httpd" Jan 30 22:03:53 crc kubenswrapper[4979]: E0130 22:03:53.207879 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-api" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.207887 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-api" Jan 30 22:03:53 crc kubenswrapper[4979]: E0130 22:03:53.207909 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-httpd" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.207920 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-httpd" Jan 30 22:03:53 crc kubenswrapper[4979]: E0130 22:03:53.207943 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b83faf-96cc-4787-814f-774416ea9811" 
containerName="glance-log" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.207951 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-log" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.208194 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-httpd" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.208209 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-log" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.208221 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" containerName="neutron-api" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.208238 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b83faf-96cc-4787-814f-774416ea9811" containerName="glance-httpd" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.209521 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.212454 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.212773 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.234023 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.242077 4979 scope.go:117] "RemoveContainer" containerID="14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd" Jan 30 22:03:53 crc kubenswrapper[4979]: E0130 22:03:53.247880 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd\": container with ID starting with 14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd not found: ID does not exist" containerID="14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.247943 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd"} err="failed to get container status \"14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd\": rpc error: code = NotFound desc = could not find container \"14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd\": container with ID starting with 14405f0e0f27c5af1ee484fb621bb6cae8cfbdfa6defecc1fe6e7c9f034b13bd not found: ID does not exist" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.247980 4979 scope.go:117] "RemoveContainer" containerID="87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d" Jan 30 22:03:53 crc kubenswrapper[4979]: E0130 22:03:53.251281 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d\": container with ID starting with 87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d not found: ID does not exist" 
containerID="87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.251337 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d"} err="failed to get container status \"87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d\": rpc error: code = NotFound desc = could not find container \"87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d\": container with ID starting with 87fdf39070dbb9272a48ba4a524fccd75c7aa10f74aa0ace463c45766d8c7b4d not found: ID does not exist" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.385489 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.385764 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.385833 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.385875 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9nbw\" (UniqueName: \"kubernetes.io/projected/b0baa205-eff4-4cad-a27f-db3599bba092-kube-api-access-r9nbw\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.385929 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.386152 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-logs\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.386251 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 
22:03:53.386276 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.487879 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488358 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488388 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488869 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9nbw\" (UniqueName: \"kubernetes.io/projected/b0baa205-eff4-4cad-a27f-db3599bba092-kube-api-access-r9nbw\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488905 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488952 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-logs\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488980 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.488997 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.489352 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.490273 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-logs\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.490337 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.494534 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.498754 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.499763 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.501315 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.509844 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9nbw\" (UniqueName: \"kubernetes.io/projected/b0baa205-eff4-4cad-a27f-db3599bba092-kube-api-access-r9nbw\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.533998 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.549009 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:03:53 crc kubenswrapper[4979]: I0130 22:03:53.982650 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 22:03:54 crc kubenswrapper[4979]: I0130 22:03:54.077869 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerStarted","Data":"8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637"} Jan 30 22:03:54 crc kubenswrapper[4979]: I0130 22:03:54.077914 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerStarted","Data":"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd"} Jan 30 22:03:54 crc kubenswrapper[4979]: I0130 22:03:54.139651 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:03:54 crc kubenswrapper[4979]: I0130 22:03:54.272159 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.037494 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.039640 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.089348 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba4b7345-9c9c-46e9-ac9a-d84093867012" path="/var/lib/kubelet/pods/ba4b7345-9c9c-46e9-ac9a-d84093867012/volumes" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.090762 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b83faf-96cc-4787-814f-774416ea9811" path="/var/lib/kubelet/pods/c3b83faf-96cc-4787-814f-774416ea9811/volumes" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.102477 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0baa205-eff4-4cad-a27f-db3599bba092","Type":"ContainerStarted","Data":"2764ceb6c35ea2f48a0d751046545351bbcae998483bb75989d6728581aa19d8"} Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.102546 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0baa205-eff4-4cad-a27f-db3599bba092","Type":"ContainerStarted","Data":"1ef7dfba2654b435b80b29127f1c9700a1f54fff7b56b29307a2ed4beab2ff4b"} Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.471172 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qr8n5"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.472546 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.487523 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qr8n5"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.543906 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-operator-scripts\") pod \"nova-api-db-create-qr8n5\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.544234 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk22r\" (UniqueName: \"kubernetes.io/projected/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-kube-api-access-xk22r\") pod \"nova-api-db-create-qr8n5\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.592948 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jjtrg"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.594542 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.608518 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1082-account-create-update-drkzw"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.610189 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.618088 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.620837 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jjtrg"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.645095 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1082-account-create-update-drkzw"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.647297 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zkhl\" (UniqueName: \"kubernetes.io/projected/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-kube-api-access-8zkhl\") pod \"nova-cell0-db-create-jjtrg\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.647332 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-operator-scripts\") pod \"nova-cell0-db-create-jjtrg\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.647383 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-operator-scripts\") pod \"nova-api-db-create-qr8n5\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.647446 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk22r\" (UniqueName: \"kubernetes.io/projected/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-kube-api-access-xk22r\") pod \"nova-api-db-create-qr8n5\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.656500 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-operator-scripts\") pod \"nova-api-db-create-qr8n5\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.683438 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk22r\" (UniqueName: \"kubernetes.io/projected/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-kube-api-access-xk22r\") pod \"nova-api-db-create-qr8n5\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.757097 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb76b95-4c2d-478d-b9d9-e6e182859ccd-operator-scripts\") pod \"nova-api-1082-account-create-update-drkzw\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.757194 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zkhl\" (UniqueName: \"kubernetes.io/projected/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-kube-api-access-8zkhl\") pod \"nova-cell0-db-create-jjtrg\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.757219 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-operator-scripts\") pod \"nova-cell0-db-create-jjtrg\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.757317 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpsgq\" (UniqueName: \"kubernetes.io/projected/adb76b95-4c2d-478d-b9d9-e6e182859ccd-kube-api-access-fpsgq\") pod \"nova-api-1082-account-create-update-drkzw\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.758486 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-operator-scripts\") pod \"nova-cell0-db-create-jjtrg\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.794219 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.800396 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fgz9b"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.801876 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.807082 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zkhl\" (UniqueName: \"kubernetes.io/projected/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-kube-api-access-8zkhl\") pod \"nova-cell0-db-create-jjtrg\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.814169 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fgz9b"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.843108 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-504c-account-create-update-m57kd"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.844612 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.851443 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.859687 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpsgq\" (UniqueName: \"kubernetes.io/projected/adb76b95-4c2d-478d-b9d9-e6e182859ccd-kube-api-access-fpsgq\") pod \"nova-api-1082-account-create-update-drkzw\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.859984 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg9qp\" (UniqueName: \"kubernetes.io/projected/bd648327-e40d-4f17-9366-1773fa95f47a-kube-api-access-vg9qp\") pod \"nova-cell0-504c-account-create-update-m57kd\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.860129 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb76b95-4c2d-478d-b9d9-e6e182859ccd-operator-scripts\") pod \"nova-api-1082-account-create-update-drkzw\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.860285 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5pz\" (UniqueName: \"kubernetes.io/projected/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-kube-api-access-jz5pz\") pod \"nova-cell1-db-create-fgz9b\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.860384 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-operator-scripts\") pod \"nova-cell1-db-create-fgz9b\" (UID: 
\"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.860463 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd648327-e40d-4f17-9366-1773fa95f47a-operator-scripts\") pod \"nova-cell0-504c-account-create-update-m57kd\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.861534 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb76b95-4c2d-478d-b9d9-e6e182859ccd-operator-scripts\") pod \"nova-api-1082-account-create-update-drkzw\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.879541 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-m57kd"] Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.892180 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpsgq\" (UniqueName: \"kubernetes.io/projected/adb76b95-4c2d-478d-b9d9-e6e182859ccd-kube-api-access-fpsgq\") pod \"nova-api-1082-account-create-update-drkzw\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.982872 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz5pz\" (UniqueName: \"kubernetes.io/projected/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-kube-api-access-jz5pz\") pod \"nova-cell1-db-create-fgz9b\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.983065 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-operator-scripts\") pod \"nova-cell1-db-create-fgz9b\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.983171 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd648327-e40d-4f17-9366-1773fa95f47a-operator-scripts\") pod \"nova-cell0-504c-account-create-update-m57kd\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.983322 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg9qp\" (UniqueName: \"kubernetes.io/projected/bd648327-e40d-4f17-9366-1773fa95f47a-kube-api-access-vg9qp\") pod \"nova-cell0-504c-account-create-update-m57kd\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:55 crc kubenswrapper[4979]: I0130 22:03:55.984539 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-operator-scripts\") pod \"nova-cell1-db-create-fgz9b\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " 
pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.001195 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd648327-e40d-4f17-9366-1773fa95f47a-operator-scripts\") pod \"nova-cell0-504c-account-create-update-m57kd\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.018132 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg9qp\" (UniqueName: \"kubernetes.io/projected/bd648327-e40d-4f17-9366-1773fa95f47a-kube-api-access-vg9qp\") pod \"nova-cell0-504c-account-create-update-m57kd\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.021627 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz5pz\" (UniqueName: \"kubernetes.io/projected/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-kube-api-access-jz5pz\") pod \"nova-cell1-db-create-fgz9b\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.022890 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-016f-account-create-update-brzlt"] Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.025980 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.033208 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.050869 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.053513 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-brzlt"] Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.064123 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.077715 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.100071 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abec2c46-a984-4314-88c5-d50d20ef7f8d-operator-scripts\") pod \"nova-cell1-016f-account-create-update-brzlt\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.100285 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4zkb\" (UniqueName: \"kubernetes.io/projected/abec2c46-a984-4314-88c5-d50d20ef7f8d-kube-api-access-g4zkb\") pod \"nova-cell1-016f-account-create-update-brzlt\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.149768 4979 generic.go:334] "Generic (PLEG): container finished" podID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerID="24204e17d4c44358eb3ce3054f01712860fc845201cf5a59bbd0c9532f6409e6" exitCode=0 Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.149897 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6e002e48-1108-41f0-a1de-5a6b89d9e534","Type":"ContainerDied","Data":"24204e17d4c44358eb3ce3054f01712860fc845201cf5a59bbd0c9532f6409e6"} Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.173427 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0baa205-eff4-4cad-a27f-db3599bba092","Type":"ContainerStarted","Data":"aa559b1135f6618404d0e60d9a772fc66e419ae78eeefe9bc432ad7bad847635"} Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.208234 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abec2c46-a984-4314-88c5-d50d20ef7f8d-operator-scripts\") pod \"nova-cell1-016f-account-create-update-brzlt\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.208341 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4zkb\" (UniqueName: \"kubernetes.io/projected/abec2c46-a984-4314-88c5-d50d20ef7f8d-kube-api-access-g4zkb\") pod \"nova-cell1-016f-account-create-update-brzlt\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.210309 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.210278575 podStartE2EDuration="3.210278575s" podCreationTimestamp="2026-01-30 22:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:56.206625228 +0000 UTC m=+1432.167872261" watchObservedRunningTime="2026-01-30 22:03:56.210278575 +0000 UTC m=+1432.171525598" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.217597 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abec2c46-a984-4314-88c5-d50d20ef7f8d-operator-scripts\") pod 
\"nova-cell1-016f-account-create-update-brzlt\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.236216 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4zkb\" (UniqueName: \"kubernetes.io/projected/abec2c46-a984-4314-88c5-d50d20ef7f8d-kube-api-access-g4zkb\") pod \"nova-cell1-016f-account-create-update-brzlt\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.306822 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.311624 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.380087 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.412900 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-scripts\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.412982 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-logs\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.413021 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htdgd\" (UniqueName: \"kubernetes.io/projected/6e002e48-1108-41f0-a1de-5a6b89d9e534-kube-api-access-htdgd\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.413144 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-combined-ca-bundle\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.413189 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-config-data\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.413259 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-internal-tls-certs\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.413353 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: 
\"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.413398 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-httpd-run\") pod \"6e002e48-1108-41f0-a1de-5a6b89d9e534\" (UID: \"6e002e48-1108-41f0-a1de-5a6b89d9e534\") " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.416555 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-logs" (OuterVolumeSpecName: "logs") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.418559 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.420474 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e002e48-1108-41f0-a1de-5a6b89d9e534-kube-api-access-htdgd" (OuterVolumeSpecName: "kube-api-access-htdgd") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "kube-api-access-htdgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.422377 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-scripts" (OuterVolumeSpecName: "scripts") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.427294 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.516743 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.516777 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.516786 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.516795 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e002e48-1108-41f0-a1de-5a6b89d9e534-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.516807 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htdgd\" (UniqueName: \"kubernetes.io/projected/6e002e48-1108-41f0-a1de-5a6b89d9e534-kube-api-access-htdgd\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.526758 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.579800 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-config-data" (OuterVolumeSpecName: "config-data") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.620217 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.620704 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.629277 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.642296 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6e002e48-1108-41f0-a1de-5a6b89d9e534" (UID: "6e002e48-1108-41f0-a1de-5a6b89d9e534"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.722748 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e002e48-1108-41f0-a1de-5a6b89d9e534-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.722784 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:56 crc kubenswrapper[4979]: I0130 22:03:56.903300 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qr8n5"] Jan 30 22:03:57 crc kubenswrapper[4979]: W0130 22:03:57.088717 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a8a7dfa_7a48_4b28_b2c1_22ae610f004a.slice/crio-df18289bd21767cca478817fcd014e4e8f707dfa6f359e3706be8fecab586951 WatchSource:0}: Error finding container df18289bd21767cca478817fcd014e4e8f707dfa6f359e3706be8fecab586951: Status 404 returned error can't find the container with id df18289bd21767cca478817fcd014e4e8f707dfa6f359e3706be8fecab586951 Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.092152 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1082-account-create-update-drkzw"] Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.096526 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jjtrg"] Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.104721 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-m57kd"] Jan 30 22:03:57 crc kubenswrapper[4979]: W0130 22:03:57.108115 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadb76b95_4c2d_478d_b9d9_e6e182859ccd.slice/crio-20f39ad0ff3d3417d68276c4a96e5fc023eb9e1315dcdefe58ee8b585f92b351 WatchSource:0}: Error finding container 20f39ad0ff3d3417d68276c4a96e5fc023eb9e1315dcdefe58ee8b585f92b351: Status 404 returned error can't find the container with id 20f39ad0ff3d3417d68276c4a96e5fc023eb9e1315dcdefe58ee8b585f92b351 Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.230552 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fgz9b"] Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.259048 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-brzlt"] Jan 30 22:03:57 crc kubenswrapper[4979]: W0130 22:03:57.259242 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabec2c46_a984_4314_88c5_d50d20ef7f8d.slice/crio-638baf2affdffa158df758a79fadbfeab13d358c2cc9e4c139c69958a3cccdfc WatchSource:0}: Error finding container 638baf2affdffa158df758a79fadbfeab13d358c2cc9e4c139c69958a3cccdfc: Status 404 returned error can't find the container with id 638baf2affdffa158df758a79fadbfeab13d358c2cc9e4c139c69958a3cccdfc Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.289709 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerStarted","Data":"07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468"} Jan 30 22:03:57 crc kubenswrapper[4979]: 
I0130 22:03:57.289913 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-central-agent" containerID="cri-o://b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1" gracePeriod=30 Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.289998 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.290081 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="proxy-httpd" containerID="cri-o://07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468" gracePeriod=30 Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.290150 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-notification-agent" containerID="cri-o://c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd" gracePeriod=30 Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.290270 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="sg-core" containerID="cri-o://8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637" gracePeriod=30 Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.299388 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qr8n5" event={"ID":"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e","Type":"ContainerStarted","Data":"5287613e36eb65b9ace85e182d98569185f491a0c8401f643ff7f5d20d7ff1a1"} Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.307283 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jjtrg" event={"ID":"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a","Type":"ContainerStarted","Data":"df18289bd21767cca478817fcd014e4e8f707dfa6f359e3706be8fecab586951"} Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.313797 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-504c-account-create-update-m57kd" event={"ID":"bd648327-e40d-4f17-9366-1773fa95f47a","Type":"ContainerStarted","Data":"04b227162d1780e1e9d4e54a32ca21d9c900228e88804dd47aed9db864e05510"} Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.320788 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.838694013 podStartE2EDuration="8.32077001s" podCreationTimestamp="2026-01-30 22:03:49 +0000 UTC" firstStartedPulling="2026-01-30 22:03:50.937209343 +0000 UTC m=+1426.898456376" lastFinishedPulling="2026-01-30 22:03:56.41928534 +0000 UTC m=+1432.380532373" observedRunningTime="2026-01-30 22:03:57.318935871 +0000 UTC m=+1433.280182904" watchObservedRunningTime="2026-01-30 22:03:57.32077001 +0000 UTC m=+1433.282017033" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.334873 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6e002e48-1108-41f0-a1de-5a6b89d9e534","Type":"ContainerDied","Data":"deeb60df8742bac120d13441d56bc1b6e0ead1fd468b98aebf5923cd40c71e08"} Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.334940 4979 scope.go:117] "RemoveContainer" 
containerID="24204e17d4c44358eb3ce3054f01712860fc845201cf5a59bbd0c9532f6409e6" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.335045 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.340176 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1082-account-create-update-drkzw" event={"ID":"adb76b95-4c2d-478d-b9d9-e6e182859ccd","Type":"ContainerStarted","Data":"20f39ad0ff3d3417d68276c4a96e5fc023eb9e1315dcdefe58ee8b585f92b351"} Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.354725 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-qr8n5" podStartSLOduration=2.35470176 podStartE2EDuration="2.35470176s" podCreationTimestamp="2026-01-30 22:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:03:57.34577304 +0000 UTC m=+1433.307020083" watchObservedRunningTime="2026-01-30 22:03:57.35470176 +0000 UTC m=+1433.315948793" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.404024 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.426494 4979 scope.go:117] "RemoveContainer" containerID="7f78fdfb980e393a32d3e4e14baa1b2c7a2c7e241035d08dc24473d3ebce5a53" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.442418 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.451724 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:03:57 crc kubenswrapper[4979]: E0130 22:03:57.452351 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-log" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.452372 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-log" Jan 30 22:03:57 crc kubenswrapper[4979]: E0130 22:03:57.452406 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-httpd" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.452413 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-httpd" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.452627 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-log" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.452639 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" containerName="glance-httpd" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.453850 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.458757 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.459088 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.486119 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.550425 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.550504 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.550578 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.550607 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.550722 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-logs\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.550843 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.551433 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5k8\" (UniqueName: \"kubernetes.io/projected/aec2e945-509e-4cbb-9988-9f6cc840cd62-kube-api-access-df5k8\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.551719 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.655685 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df5k8\" (UniqueName: \"kubernetes.io/projected/aec2e945-509e-4cbb-9988-9f6cc840cd62-kube-api-access-df5k8\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.656301 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.656582 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.656616 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.656724 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.656776 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.656906 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-logs\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.657676 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.657935 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.658276 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.660439 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-logs\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.687359 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.690565 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.692640 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.694118 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df5k8\" (UniqueName: \"kubernetes.io/projected/aec2e945-509e-4cbb-9988-9f6cc840cd62-kube-api-access-df5k8\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.727656 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.760168 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " pod="openstack/glance-default-internal-api-0" Jan 30 22:03:57 crc kubenswrapper[4979]: I0130 22:03:57.801685 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.350336 4979 generic.go:334] "Generic (PLEG): container finished" podID="abec2c46-a984-4314-88c5-d50d20ef7f8d" containerID="d40ebbabe3d8f2995f627a1ae83a4f0a8052321d11e2329aba49ee99c9ce1294" exitCode=0 Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.350419 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-016f-account-create-update-brzlt" event={"ID":"abec2c46-a984-4314-88c5-d50d20ef7f8d","Type":"ContainerDied","Data":"d40ebbabe3d8f2995f627a1ae83a4f0a8052321d11e2329aba49ee99c9ce1294"} Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.350453 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-016f-account-create-update-brzlt" event={"ID":"abec2c46-a984-4314-88c5-d50d20ef7f8d","Type":"ContainerStarted","Data":"638baf2affdffa158df758a79fadbfeab13d358c2cc9e4c139c69958a3cccdfc"} Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.352966 4979 generic.go:334] "Generic (PLEG): container finished" podID="4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" containerID="4346269c3467fb9983ba22a3da499f523fe4b5d9072377bdb3c9eadf809fe8ff" exitCode=0 Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.353024 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jjtrg" event={"ID":"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a","Type":"ContainerDied","Data":"4346269c3467fb9983ba22a3da499f523fe4b5d9072377bdb3c9eadf809fe8ff"} Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.354862 4979 generic.go:334] "Generic (PLEG): container finished" podID="bd648327-e40d-4f17-9366-1773fa95f47a" containerID="78e6994e836809eb6c4147c73b39f8c34653cb31054d04a758e600e5a045351d" exitCode=0 Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.354925 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-504c-account-create-update-m57kd" event={"ID":"bd648327-e40d-4f17-9366-1773fa95f47a","Type":"ContainerDied","Data":"78e6994e836809eb6c4147c73b39f8c34653cb31054d04a758e600e5a045351d"} Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.357937 4979 generic.go:334] "Generic (PLEG): container finished" podID="adb76b95-4c2d-478d-b9d9-e6e182859ccd" containerID="65f7df0a5f220ddf8b419657c4d7771409b9a8c3c511a14b07fabfbb8e20fede" exitCode=0 Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.357974 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1082-account-create-update-drkzw" event={"ID":"adb76b95-4c2d-478d-b9d9-e6e182859ccd","Type":"ContainerDied","Data":"65f7df0a5f220ddf8b419657c4d7771409b9a8c3c511a14b07fabfbb8e20fede"} Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.360725 4979 generic.go:334] "Generic (PLEG): container finished" podID="91a73a79-d17b-4370-a554-acccc33344ba" containerID="07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468" exitCode=0 Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.360747 4979 generic.go:334] "Generic (PLEG): container finished" podID="91a73a79-d17b-4370-a554-acccc33344ba" containerID="8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637" exitCode=2 Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.360758 4979 generic.go:334] "Generic (PLEG): container finished" podID="91a73a79-d17b-4370-a554-acccc33344ba" containerID="c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd" exitCode=0 Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.360795 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerDied","Data":"07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468"} Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.360843 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerDied","Data":"8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637"} Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.360867 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerDied","Data":"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd"} Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.362670 4979 generic.go:334] "Generic (PLEG): container finished" podID="7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" containerID="ce15c22300306383eb564954b64ad58a13fe8c8c246e3d682e1063ba2ed2a496" exitCode=0 Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.362816 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qr8n5" event={"ID":"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e","Type":"ContainerDied","Data":"ce15c22300306383eb564954b64ad58a13fe8c8c246e3d682e1063ba2ed2a496"} Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.364859 4979 generic.go:334] "Generic (PLEG): container finished" podID="bc0c5054-9597-4b94-a1d6-1f424c1d6de4" containerID="d6d25ae31ed5e6d9c7cb7e6adcce8605ff98681415f720f118a7c85b8f2468e0" exitCode=0 Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.364888 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fgz9b" event={"ID":"bc0c5054-9597-4b94-a1d6-1f424c1d6de4","Type":"ContainerDied","Data":"d6d25ae31ed5e6d9c7cb7e6adcce8605ff98681415f720f118a7c85b8f2468e0"} Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.364920 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fgz9b" event={"ID":"bc0c5054-9597-4b94-a1d6-1f424c1d6de4","Type":"ContainerStarted","Data":"6e63b7e0b8f850b8f49982133a6589249d41e457d00781d4b3e30f84278b613a"} Jan 30 22:03:58 crc kubenswrapper[4979]: I0130 22:03:58.553631 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:03:58 crc kubenswrapper[4979]: W0130 22:03:58.561537 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaec2e945_509e_4cbb_9988_9f6cc840cd62.slice/crio-990e62f23c4a472cdff8c54aae9968515af6d18a52e99ad51f4c27a84120a7dd WatchSource:0}: Error finding container 990e62f23c4a472cdff8c54aae9968515af6d18a52e99ad51f4c27a84120a7dd: Status 404 returned error can't find the container with id 990e62f23c4a472cdff8c54aae9968515af6d18a52e99ad51f4c27a84120a7dd Jan 30 22:03:59 crc kubenswrapper[4979]: I0130 22:03:59.082607 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e002e48-1108-41f0-a1de-5a6b89d9e534" path="/var/lib/kubelet/pods/6e002e48-1108-41f0-a1de-5a6b89d9e534/volumes" Jan 30 22:03:59 crc kubenswrapper[4979]: I0130 22:03:59.402327 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aec2e945-509e-4cbb-9988-9f6cc840cd62","Type":"ContainerStarted","Data":"3a0f2c5f20fe7df83f657bd57b9e6599013ae4fe90547daa544d3812ba096c45"} Jan 30 22:03:59 crc 
kubenswrapper[4979]: I0130 22:03:59.402794 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aec2e945-509e-4cbb-9988-9f6cc840cd62","Type":"ContainerStarted","Data":"990e62f23c4a472cdff8c54aae9968515af6d18a52e99ad51f4c27a84120a7dd"} Jan 30 22:03:59 crc kubenswrapper[4979]: I0130 22:03:59.910574 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.034842 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-operator-scripts\") pod \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.038409 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk22r\" (UniqueName: \"kubernetes.io/projected/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-kube-api-access-xk22r\") pod \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\" (UID: \"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e\") " Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.049771 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" (UID: "7743e00f-3d49-4d9f-8057-f86dc7dc8f0e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.072972 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-kube-api-access-xk22r" (OuterVolumeSpecName: "kube-api-access-xk22r") pod "7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" (UID: "7743e00f-3d49-4d9f-8057-f86dc7dc8f0e"). InnerVolumeSpecName "kube-api-access-xk22r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.162648 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk22r\" (UniqueName: \"kubernetes.io/projected/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-kube-api-access-xk22r\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.162686 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.290666 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.305459 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.324001 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.332086 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.350138 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365733 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4zkb\" (UniqueName: \"kubernetes.io/projected/abec2c46-a984-4314-88c5-d50d20ef7f8d-kube-api-access-g4zkb\") pod \"abec2c46-a984-4314-88c5-d50d20ef7f8d\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365781 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz5pz\" (UniqueName: \"kubernetes.io/projected/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-kube-api-access-jz5pz\") pod \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365830 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zkhl\" (UniqueName: \"kubernetes.io/projected/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-kube-api-access-8zkhl\") pod \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365851 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg9qp\" (UniqueName: \"kubernetes.io/projected/bd648327-e40d-4f17-9366-1773fa95f47a-kube-api-access-vg9qp\") pod \"bd648327-e40d-4f17-9366-1773fa95f47a\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365906 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-operator-scripts\") pod \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\" (UID: \"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a\") " Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365936 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abec2c46-a984-4314-88c5-d50d20ef7f8d-operator-scripts\") pod \"abec2c46-a984-4314-88c5-d50d20ef7f8d\" (UID: \"abec2c46-a984-4314-88c5-d50d20ef7f8d\") " Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.365989 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd648327-e40d-4f17-9366-1773fa95f47a-operator-scripts\") pod \"bd648327-e40d-4f17-9366-1773fa95f47a\" (UID: \"bd648327-e40d-4f17-9366-1773fa95f47a\") " Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.366042 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpsgq\" (UniqueName: \"kubernetes.io/projected/adb76b95-4c2d-478d-b9d9-e6e182859ccd-kube-api-access-fpsgq\") pod \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.366070 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb76b95-4c2d-478d-b9d9-e6e182859ccd-operator-scripts\") pod \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\" (UID: \"adb76b95-4c2d-478d-b9d9-e6e182859ccd\") " Jan 30 22:04:00 crc 
kubenswrapper[4979]: I0130 22:04:00.366112 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-operator-scripts\") pod \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\" (UID: \"bc0c5054-9597-4b94-a1d6-1f424c1d6de4\") " Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.366800 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc0c5054-9597-4b94-a1d6-1f424c1d6de4" (UID: "bc0c5054-9597-4b94-a1d6-1f424c1d6de4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.367207 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abec2c46-a984-4314-88c5-d50d20ef7f8d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abec2c46-a984-4314-88c5-d50d20ef7f8d" (UID: "abec2c46-a984-4314-88c5-d50d20ef7f8d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.367646 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" (UID: "4a8a7dfa-7a48-4b28-b2c1-22ae610f004a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.367737 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adb76b95-4c2d-478d-b9d9-e6e182859ccd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "adb76b95-4c2d-478d-b9d9-e6e182859ccd" (UID: "adb76b95-4c2d-478d-b9d9-e6e182859ccd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.368270 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd648327-e40d-4f17-9366-1773fa95f47a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd648327-e40d-4f17-9366-1773fa95f47a" (UID: "bd648327-e40d-4f17-9366-1773fa95f47a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.373021 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb76b95-4c2d-478d-b9d9-e6e182859ccd-kube-api-access-fpsgq" (OuterVolumeSpecName: "kube-api-access-fpsgq") pod "adb76b95-4c2d-478d-b9d9-e6e182859ccd" (UID: "adb76b95-4c2d-478d-b9d9-e6e182859ccd"). InnerVolumeSpecName "kube-api-access-fpsgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.373709 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abec2c46-a984-4314-88c5-d50d20ef7f8d-kube-api-access-g4zkb" (OuterVolumeSpecName: "kube-api-access-g4zkb") pod "abec2c46-a984-4314-88c5-d50d20ef7f8d" (UID: "abec2c46-a984-4314-88c5-d50d20ef7f8d"). InnerVolumeSpecName "kube-api-access-g4zkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.373746 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-kube-api-access-jz5pz" (OuterVolumeSpecName: "kube-api-access-jz5pz") pod "bc0c5054-9597-4b94-a1d6-1f424c1d6de4" (UID: "bc0c5054-9597-4b94-a1d6-1f424c1d6de4"). InnerVolumeSpecName "kube-api-access-jz5pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.378231 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-kube-api-access-8zkhl" (OuterVolumeSpecName: "kube-api-access-8zkhl") pod "4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" (UID: "4a8a7dfa-7a48-4b28-b2c1-22ae610f004a"). InnerVolumeSpecName "kube-api-access-8zkhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.392520 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd648327-e40d-4f17-9366-1773fa95f47a-kube-api-access-vg9qp" (OuterVolumeSpecName: "kube-api-access-vg9qp") pod "bd648327-e40d-4f17-9366-1773fa95f47a" (UID: "bd648327-e40d-4f17-9366-1773fa95f47a"). InnerVolumeSpecName "kube-api-access-vg9qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.417137 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qr8n5" event={"ID":"7743e00f-3d49-4d9f-8057-f86dc7dc8f0e","Type":"ContainerDied","Data":"5287613e36eb65b9ace85e182d98569185f491a0c8401f643ff7f5d20d7ff1a1"} Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.417187 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5287613e36eb65b9ace85e182d98569185f491a0c8401f643ff7f5d20d7ff1a1" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.417298 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qr8n5" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.420383 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fgz9b" event={"ID":"bc0c5054-9597-4b94-a1d6-1f424c1d6de4","Type":"ContainerDied","Data":"6e63b7e0b8f850b8f49982133a6589249d41e457d00781d4b3e30f84278b613a"} Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.420426 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e63b7e0b8f850b8f49982133a6589249d41e457d00781d4b3e30f84278b613a" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.420494 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fgz9b" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.430060 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-016f-account-create-update-brzlt" event={"ID":"abec2c46-a984-4314-88c5-d50d20ef7f8d","Type":"ContainerDied","Data":"638baf2affdffa158df758a79fadbfeab13d358c2cc9e4c139c69958a3cccdfc"} Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.430103 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="638baf2affdffa158df758a79fadbfeab13d358c2cc9e4c139c69958a3cccdfc" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.430180 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-brzlt" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.433100 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jjtrg" event={"ID":"4a8a7dfa-7a48-4b28-b2c1-22ae610f004a","Type":"ContainerDied","Data":"df18289bd21767cca478817fcd014e4e8f707dfa6f359e3706be8fecab586951"} Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.433159 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df18289bd21767cca478817fcd014e4e8f707dfa6f359e3706be8fecab586951" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.433888 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jjtrg" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.439683 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-504c-account-create-update-m57kd" event={"ID":"bd648327-e40d-4f17-9366-1773fa95f47a","Type":"ContainerDied","Data":"04b227162d1780e1e9d4e54a32ca21d9c900228e88804dd47aed9db864e05510"} Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.439726 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04b227162d1780e1e9d4e54a32ca21d9c900228e88804dd47aed9db864e05510" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.439790 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-m57kd" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.451724 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aec2e945-509e-4cbb-9988-9f6cc840cd62","Type":"ContainerStarted","Data":"10bc5c2d6026fb9b6e38741866768cd6cce92452ca56fb4384be71b3bffc65c0"} Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.456563 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1082-account-create-update-drkzw" event={"ID":"adb76b95-4c2d-478d-b9d9-e6e182859ccd","Type":"ContainerDied","Data":"20f39ad0ff3d3417d68276c4a96e5fc023eb9e1315dcdefe58ee8b585f92b351"} Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.456593 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1082-account-create-update-drkzw" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.456611 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20f39ad0ff3d3417d68276c4a96e5fc023eb9e1315dcdefe58ee8b585f92b351" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468223 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zkhl\" (UniqueName: \"kubernetes.io/projected/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-kube-api-access-8zkhl\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468259 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg9qp\" (UniqueName: \"kubernetes.io/projected/bd648327-e40d-4f17-9366-1773fa95f47a-kube-api-access-vg9qp\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468271 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468283 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abec2c46-a984-4314-88c5-d50d20ef7f8d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468293 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd648327-e40d-4f17-9366-1773fa95f47a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468302 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpsgq\" (UniqueName: \"kubernetes.io/projected/adb76b95-4c2d-478d-b9d9-e6e182859ccd-kube-api-access-fpsgq\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468314 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb76b95-4c2d-478d-b9d9-e6e182859ccd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468323 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468336 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4zkb\" (UniqueName: \"kubernetes.io/projected/abec2c46-a984-4314-88c5-d50d20ef7f8d-kube-api-access-g4zkb\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.468345 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz5pz\" (UniqueName: \"kubernetes.io/projected/bc0c5054-9597-4b94-a1d6-1f424c1d6de4-kube-api-access-jz5pz\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:00 crc kubenswrapper[4979]: I0130 22:04:00.491708 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.491689419 podStartE2EDuration="3.491689419s" podCreationTimestamp="2026-01-30 22:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:00.487685422 +0000 UTC 
m=+1436.448932455" watchObservedRunningTime="2026-01-30 22:04:00.491689419 +0000 UTC m=+1436.452936442" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.374411 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.470574 4979 generic.go:334] "Generic (PLEG): container finished" podID="91a73a79-d17b-4370-a554-acccc33344ba" containerID="b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1" exitCode=0 Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.470699 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.470790 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerDied","Data":"b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1"} Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.470842 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a73a79-d17b-4370-a554-acccc33344ba","Type":"ContainerDied","Data":"05d263b2ae4bb6d568013dad8e91f1c9cdedcc9f40a1f8559678d317541ba867"} Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.470872 4979 scope.go:117] "RemoveContainer" containerID="07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.496777 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-scripts\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.497076 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-sg-core-conf-yaml\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.497195 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msbfp\" (UniqueName: \"kubernetes.io/projected/91a73a79-d17b-4370-a554-acccc33344ba-kube-api-access-msbfp\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.497344 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-config-data\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.497438 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-log-httpd\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.497474 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-run-httpd\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: 
\"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.497515 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-combined-ca-bundle\") pod \"91a73a79-d17b-4370-a554-acccc33344ba\" (UID: \"91a73a79-d17b-4370-a554-acccc33344ba\") " Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.499852 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.500346 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.501127 4979 scope.go:117] "RemoveContainer" containerID="8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.506734 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-scripts" (OuterVolumeSpecName: "scripts") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.507119 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a73a79-d17b-4370-a554-acccc33344ba-kube-api-access-msbfp" (OuterVolumeSpecName: "kube-api-access-msbfp") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "kube-api-access-msbfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.537399 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.575608 4979 scope.go:117] "RemoveContainer" containerID="c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.600177 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.600224 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.600242 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msbfp\" (UniqueName: \"kubernetes.io/projected/91a73a79-d17b-4370-a554-acccc33344ba-kube-api-access-msbfp\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.600254 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.600265 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a73a79-d17b-4370-a554-acccc33344ba-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.610499 4979 scope.go:117] "RemoveContainer" containerID="b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.621932 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-config-data" (OuterVolumeSpecName: "config-data") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.633214 4979 scope.go:117] "RemoveContainer" containerID="07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.633864 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468\": container with ID starting with 07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468 not found: ID does not exist" containerID="07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.633911 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468"} err="failed to get container status \"07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468\": rpc error: code = NotFound desc = could not find container \"07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468\": container with ID starting with 07c78a51a6fb0b1f62c30a73b93f40387096ca7bf652ba21834db6e04ac07468 not found: ID does not exist" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.633944 4979 scope.go:117] "RemoveContainer" containerID="8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.634278 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637\": container with ID starting with 8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637 not found: ID does not exist" containerID="8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.634314 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637"} err="failed to get container status \"8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637\": rpc error: code = NotFound desc = could not find container \"8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637\": container with ID starting with 8a357e29e08a0427cefd23bb1c8ff1d5d15579e0b7c1be0f7c380f535654d637 not found: ID does not exist" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.634328 4979 scope.go:117] "RemoveContainer" containerID="c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.634694 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd\": container with ID starting with c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd not found: ID does not exist" containerID="c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.634725 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd"} err="failed to get container status \"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd\": rpc error: code = NotFound desc = could not 
find container \"c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd\": container with ID starting with c9bedd395c2d6e88cf65634d1c3281af917fbaa0c7b452d5a3dad92332fb56cd not found: ID does not exist" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.634743 4979 scope.go:117] "RemoveContainer" containerID="b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.635072 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1\": container with ID starting with b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1 not found: ID does not exist" containerID="b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.635097 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1"} err="failed to get container status \"b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1\": rpc error: code = NotFound desc = could not find container \"b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1\": container with ID starting with b86d136ef13f54d139b85bba8f166db23c859409fb812d5e44e6f3364087f7c1 not found: ID does not exist" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.647345 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91a73a79-d17b-4370-a554-acccc33344ba" (UID: "91a73a79-d17b-4370-a554-acccc33344ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.703819 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.703864 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a73a79-d17b-4370-a554-acccc33344ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.817592 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.826858 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.841900 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842419 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abec2c46-a984-4314-88c5-d50d20ef7f8d" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842436 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="abec2c46-a984-4314-88c5-d50d20ef7f8d" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842454 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="proxy-httpd" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842461 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="proxy-httpd" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842469 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-notification-agent" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842476 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-notification-agent" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842489 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842497 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842507 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0c5054-9597-4b94-a1d6-1f424c1d6de4" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842514 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0c5054-9597-4b94-a1d6-1f424c1d6de4" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842525 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb76b95-4c2d-478d-b9d9-e6e182859ccd" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842533 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb76b95-4c2d-478d-b9d9-e6e182859ccd" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc 
kubenswrapper[4979]: E0130 22:04:01.842559 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="sg-core" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842567 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="sg-core" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842585 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-central-agent" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842592 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-central-agent" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842599 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842607 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: E0130 22:04:01.842630 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd648327-e40d-4f17-9366-1773fa95f47a" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842638 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd648327-e40d-4f17-9366-1773fa95f47a" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842829 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0c5054-9597-4b94-a1d6-1f424c1d6de4" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842842 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="abec2c46-a984-4314-88c5-d50d20ef7f8d" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842850 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="sg-core" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842867 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb76b95-4c2d-478d-b9d9-e6e182859ccd" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842879 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-notification-agent" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842885 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842892 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" containerName="mariadb-database-create" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842902 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd648327-e40d-4f17-9366-1773fa95f47a" containerName="mariadb-account-create-update" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.842912 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="ceilometer-central-agent" Jan 30 22:04:01 crc 
kubenswrapper[4979]: I0130 22:04:01.842922 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a73a79-d17b-4370-a554-acccc33344ba" containerName="proxy-httpd" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.844842 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.847700 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.848089 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:04:01 crc kubenswrapper[4979]: I0130 22:04:01.898735 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.011841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-run-httpd\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.011929 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.012111 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-log-httpd\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.012184 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.012224 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl9n8\" (UniqueName: \"kubernetes.io/projected/29bcacff-6888-44ea-aea7-79eeedfd2e5c-kube-api-access-dl9n8\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.012296 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-config-data\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.012353 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-scripts\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.113744 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-config-data\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.113838 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-scripts\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.113885 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-run-httpd\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.113907 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.113971 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-log-httpd\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.114015 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.114054 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl9n8\" (UniqueName: \"kubernetes.io/projected/29bcacff-6888-44ea-aea7-79eeedfd2e5c-kube-api-access-dl9n8\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.115953 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-run-httpd\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.116339 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-log-httpd\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.120542 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.122235 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-config-data\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.124478 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-scripts\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.134171 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.139444 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl9n8\" (UniqueName: \"kubernetes.io/projected/29bcacff-6888-44ea-aea7-79eeedfd2e5c-kube-api-access-dl9n8\") pod \"ceilometer-0\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.191802 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.342135 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:02 crc kubenswrapper[4979]: I0130 22:04:02.681471 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:02 crc kubenswrapper[4979]: W0130 22:04:02.684206 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29bcacff_6888_44ea_aea7_79eeedfd2e5c.slice/crio-25fbd41db99e9fe0a09f07c2410171bc9100cfd406357aa975ddcb5f1c5be19b WatchSource:0}: Error finding container 25fbd41db99e9fe0a09f07c2410171bc9100cfd406357aa975ddcb5f1c5be19b: Status 404 returned error can't find the container with id 25fbd41db99e9fe0a09f07c2410171bc9100cfd406357aa975ddcb5f1c5be19b Jan 30 22:04:03 crc kubenswrapper[4979]: I0130 22:04:03.082072 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91a73a79-d17b-4370-a554-acccc33344ba" path="/var/lib/kubelet/pods/91a73a79-d17b-4370-a554-acccc33344ba/volumes" Jan 30 22:04:03 crc kubenswrapper[4979]: I0130 22:04:03.497234 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerStarted","Data":"25fbd41db99e9fe0a09f07c2410171bc9100cfd406357aa975ddcb5f1c5be19b"} Jan 30 22:04:03 crc kubenswrapper[4979]: I0130 22:04:03.549533 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 22:04:03 crc kubenswrapper[4979]: I0130 22:04:03.549595 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 22:04:03 crc kubenswrapper[4979]: I0130 22:04:03.591051 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 22:04:03 crc kubenswrapper[4979]: I0130 22:04:03.609821 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 
22:04:04 crc kubenswrapper[4979]: I0130 22:04:04.519857 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerStarted","Data":"72bfac39271d57208e4cda3468ff5baccf6b1245e4b758e434adb5dd707842e2"} Jan 30 22:04:04 crc kubenswrapper[4979]: I0130 22:04:04.520283 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 22:04:04 crc kubenswrapper[4979]: I0130 22:04:04.520298 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerStarted","Data":"94bba789930aa5a8fdbeb17ece25d6e06e37c5349f248886c3fad60a18712d23"} Jan 30 22:04:04 crc kubenswrapper[4979]: I0130 22:04:04.520315 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 22:04:05 crc kubenswrapper[4979]: I0130 22:04:05.532222 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerStarted","Data":"75c7338e97a58ddf4ddd727df9ef55106db41079c528545a6fbf69380cb74b87"} Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.350580 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cbmzn"] Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.352802 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.360945 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.372819 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cbmzn"] Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.373196 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-q2lvs" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.373652 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.418690 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.418801 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-config-data\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.418839 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwtbv\" (UniqueName: \"kubernetes.io/projected/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-kube-api-access-jwtbv\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " 
pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.418861 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-scripts\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.521632 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.521727 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-config-data\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.521772 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwtbv\" (UniqueName: \"kubernetes.io/projected/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-kube-api-access-jwtbv\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.521803 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-scripts\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.530275 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-config-data\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.530302 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-scripts\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.530875 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.541227 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.541261 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 
22:04:06.546428 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwtbv\" (UniqueName: \"kubernetes.io/projected/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-kube-api-access-jwtbv\") pod \"nova-cell0-conductor-db-sync-cbmzn\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:06 crc kubenswrapper[4979]: I0130 22:04:06.673562 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.050931 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.125071 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.328809 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cbmzn"] Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.553798 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerStarted","Data":"3edabfebe35a16b52c89d040072c407d09ef108efd34683f83838647ad8307c0"} Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.554374 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.553999 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="sg-core" containerID="cri-o://75c7338e97a58ddf4ddd727df9ef55106db41079c528545a6fbf69380cb74b87" gracePeriod=30 Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.553964 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="proxy-httpd" containerID="cri-o://3edabfebe35a16b52c89d040072c407d09ef108efd34683f83838647ad8307c0" gracePeriod=30 Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.554023 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-notification-agent" containerID="cri-o://72bfac39271d57208e4cda3468ff5baccf6b1245e4b758e434adb5dd707842e2" gracePeriod=30 Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.554326 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-central-agent" containerID="cri-o://94bba789930aa5a8fdbeb17ece25d6e06e37c5349f248886c3fad60a18712d23" gracePeriod=30 Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.556013 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" event={"ID":"170f93fa-8e66-4ae0-ab49-b2db51c1afa5","Type":"ContainerStarted","Data":"fe92592e0879b96bdce141b30cbe05a1c3b99dc1723f96ea5d0aefbbdc1a1b6d"} Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.581331 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.06106256 podStartE2EDuration="6.581309588s" podCreationTimestamp="2026-01-30 22:04:01 +0000 UTC" 
firstStartedPulling="2026-01-30 22:04:02.687153535 +0000 UTC m=+1438.648400568" lastFinishedPulling="2026-01-30 22:04:07.207400563 +0000 UTC m=+1443.168647596" observedRunningTime="2026-01-30 22:04:07.573875618 +0000 UTC m=+1443.535122651" watchObservedRunningTime="2026-01-30 22:04:07.581309588 +0000 UTC m=+1443.542556621" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.802688 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.802814 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.839552 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:07 crc kubenswrapper[4979]: I0130 22:04:07.848869 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:08 crc kubenswrapper[4979]: I0130 22:04:08.567464 4979 generic.go:334] "Generic (PLEG): container finished" podID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerID="75c7338e97a58ddf4ddd727df9ef55106db41079c528545a6fbf69380cb74b87" exitCode=2 Jan 30 22:04:08 crc kubenswrapper[4979]: I0130 22:04:08.567919 4979 generic.go:334] "Generic (PLEG): container finished" podID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerID="72bfac39271d57208e4cda3468ff5baccf6b1245e4b758e434adb5dd707842e2" exitCode=0 Jan 30 22:04:08 crc kubenswrapper[4979]: I0130 22:04:08.567533 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerDied","Data":"75c7338e97a58ddf4ddd727df9ef55106db41079c528545a6fbf69380cb74b87"} Jan 30 22:04:08 crc kubenswrapper[4979]: I0130 22:04:08.567995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerDied","Data":"72bfac39271d57208e4cda3468ff5baccf6b1245e4b758e434adb5dd707842e2"} Jan 30 22:04:08 crc kubenswrapper[4979]: I0130 22:04:08.569002 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:08 crc kubenswrapper[4979]: I0130 22:04:08.569059 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:10 crc kubenswrapper[4979]: I0130 22:04:10.703551 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:10 crc kubenswrapper[4979]: I0130 22:04:10.703680 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 22:04:10 crc kubenswrapper[4979]: I0130 22:04:10.989817 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 22:04:11 crc kubenswrapper[4979]: I0130 22:04:11.618707 4979 generic.go:334] "Generic (PLEG): container finished" podID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerID="94bba789930aa5a8fdbeb17ece25d6e06e37c5349f248886c3fad60a18712d23" exitCode=0 Jan 30 22:04:11 crc kubenswrapper[4979]: I0130 22:04:11.618942 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerDied","Data":"94bba789930aa5a8fdbeb17ece25d6e06e37c5349f248886c3fad60a18712d23"} Jan 30 22:04:18 crc kubenswrapper[4979]: I0130 22:04:18.709604 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" event={"ID":"170f93fa-8e66-4ae0-ab49-b2db51c1afa5","Type":"ContainerStarted","Data":"ba2e39cff92291b5bd37681d66a67ae8cdc39f314eafc2ca6a8f88001981f1b9"} Jan 30 22:04:18 crc kubenswrapper[4979]: I0130 22:04:18.733450 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" podStartSLOduration=1.971037178 podStartE2EDuration="12.73341623s" podCreationTimestamp="2026-01-30 22:04:06 +0000 UTC" firstStartedPulling="2026-01-30 22:04:07.332411244 +0000 UTC m=+1443.293658277" lastFinishedPulling="2026-01-30 22:04:18.094790306 +0000 UTC m=+1454.056037329" observedRunningTime="2026-01-30 22:04:18.723102963 +0000 UTC m=+1454.684349996" watchObservedRunningTime="2026-01-30 22:04:18.73341623 +0000 UTC m=+1454.694663263" Jan 30 22:04:30 crc kubenswrapper[4979]: I0130 22:04:30.848926 4979 generic.go:334] "Generic (PLEG): container finished" podID="170f93fa-8e66-4ae0-ab49-b2db51c1afa5" containerID="ba2e39cff92291b5bd37681d66a67ae8cdc39f314eafc2ca6a8f88001981f1b9" exitCode=0 Jan 30 22:04:30 crc kubenswrapper[4979]: I0130 22:04:30.849057 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" event={"ID":"170f93fa-8e66-4ae0-ab49-b2db51c1afa5","Type":"ContainerDied","Data":"ba2e39cff92291b5bd37681d66a67ae8cdc39f314eafc2ca6a8f88001981f1b9"} Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.198928 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.239161 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.368165 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwtbv\" (UniqueName: \"kubernetes.io/projected/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-kube-api-access-jwtbv\") pod \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.368426 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-scripts\") pod \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.369493 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-config-data\") pod \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.369551 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-combined-ca-bundle\") pod \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\" (UID: \"170f93fa-8e66-4ae0-ab49-b2db51c1afa5\") " Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.376629 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-kube-api-access-jwtbv" (OuterVolumeSpecName: "kube-api-access-jwtbv") pod "170f93fa-8e66-4ae0-ab49-b2db51c1afa5" (UID: "170f93fa-8e66-4ae0-ab49-b2db51c1afa5"). InnerVolumeSpecName "kube-api-access-jwtbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.377433 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-scripts" (OuterVolumeSpecName: "scripts") pod "170f93fa-8e66-4ae0-ab49-b2db51c1afa5" (UID: "170f93fa-8e66-4ae0-ab49-b2db51c1afa5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.403931 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "170f93fa-8e66-4ae0-ab49-b2db51c1afa5" (UID: "170f93fa-8e66-4ae0-ab49-b2db51c1afa5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.408132 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-config-data" (OuterVolumeSpecName: "config-data") pod "170f93fa-8e66-4ae0-ab49-b2db51c1afa5" (UID: "170f93fa-8e66-4ae0-ab49-b2db51c1afa5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.500938 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwtbv\" (UniqueName: \"kubernetes.io/projected/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-kube-api-access-jwtbv\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.501026 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.501071 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.501090 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f93fa-8e66-4ae0-ab49-b2db51c1afa5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.873130 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" event={"ID":"170f93fa-8e66-4ae0-ab49-b2db51c1afa5","Type":"ContainerDied","Data":"fe92592e0879b96bdce141b30cbe05a1c3b99dc1723f96ea5d0aefbbdc1a1b6d"} Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.873180 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe92592e0879b96bdce141b30cbe05a1c3b99dc1723f96ea5d0aefbbdc1a1b6d" Jan 30 22:04:32 crc kubenswrapper[4979]: I0130 22:04:32.873295 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cbmzn" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.016935 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 22:04:33 crc kubenswrapper[4979]: E0130 22:04:33.017575 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170f93fa-8e66-4ae0-ab49-b2db51c1afa5" containerName="nova-cell0-conductor-db-sync" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.017602 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="170f93fa-8e66-4ae0-ab49-b2db51c1afa5" containerName="nova-cell0-conductor-db-sync" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.017859 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="170f93fa-8e66-4ae0-ab49-b2db51c1afa5" containerName="nova-cell0-conductor-db-sync" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.018731 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.021592 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-q2lvs" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.022979 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.034618 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.115973 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4sfk\" (UniqueName: \"kubernetes.io/projected/c04339fa-9eb7-4671-895b-ef768888add0-kube-api-access-f4sfk\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.116507 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.116545 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.218421 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4sfk\" (UniqueName: \"kubernetes.io/projected/c04339fa-9eb7-4671-895b-ef768888add0-kube-api-access-f4sfk\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.218498 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.218533 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.227672 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.228086 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.245607 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4sfk\" (UniqueName: \"kubernetes.io/projected/c04339fa-9eb7-4671-895b-ef768888add0-kube-api-access-f4sfk\") pod \"nova-cell0-conductor-0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.338417 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.818667 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 22:04:33 crc kubenswrapper[4979]: I0130 22:04:33.886454 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c04339fa-9eb7-4671-895b-ef768888add0","Type":"ContainerStarted","Data":"9644ea1b50d881a5fc87efbeb25d5fe3195c9de5bf0f6fd1b1d5b2e65c2a5124"} Jan 30 22:04:34 crc kubenswrapper[4979]: I0130 22:04:34.900378 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c04339fa-9eb7-4671-895b-ef768888add0","Type":"ContainerStarted","Data":"383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb"} Jan 30 22:04:34 crc kubenswrapper[4979]: I0130 22:04:34.901080 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 22:04:34 crc kubenswrapper[4979]: I0130 22:04:34.926536 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.9264945620000002 podStartE2EDuration="2.926494562s" podCreationTimestamp="2026-01-30 22:04:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:34.921869458 +0000 UTC m=+1470.883116521" watchObservedRunningTime="2026-01-30 22:04:34.926494562 +0000 UTC m=+1470.887741595" Jan 30 22:04:37 crc kubenswrapper[4979]: I0130 22:04:37.937940 4979 generic.go:334] "Generic (PLEG): container finished" podID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerID="3edabfebe35a16b52c89d040072c407d09ef108efd34683f83838647ad8307c0" exitCode=137 Jan 30 22:04:37 crc kubenswrapper[4979]: I0130 22:04:37.938775 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerDied","Data":"3edabfebe35a16b52c89d040072c407d09ef108efd34683f83838647ad8307c0"} Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.569428 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.657580 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-scripts\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.657695 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-config-data\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.657727 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl9n8\" (UniqueName: \"kubernetes.io/projected/29bcacff-6888-44ea-aea7-79eeedfd2e5c-kube-api-access-dl9n8\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.657808 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-combined-ca-bundle\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.657897 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-sg-core-conf-yaml\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.657970 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-run-httpd\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.658001 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-log-httpd\") pod \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\" (UID: \"29bcacff-6888-44ea-aea7-79eeedfd2e5c\") " Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.659101 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.659270 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.666303 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-scripts" (OuterVolumeSpecName: "scripts") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.667080 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29bcacff-6888-44ea-aea7-79eeedfd2e5c-kube-api-access-dl9n8" (OuterVolumeSpecName: "kube-api-access-dl9n8") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "kube-api-access-dl9n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.697080 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.737943 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.760709 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl9n8\" (UniqueName: \"kubernetes.io/projected/29bcacff-6888-44ea-aea7-79eeedfd2e5c-kube-api-access-dl9n8\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.760931 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.761060 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.761154 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.761250 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29bcacff-6888-44ea-aea7-79eeedfd2e5c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.761324 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.775850 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-config-data" (OuterVolumeSpecName: "config-data") pod "29bcacff-6888-44ea-aea7-79eeedfd2e5c" (UID: "29bcacff-6888-44ea-aea7-79eeedfd2e5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.864150 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bcacff-6888-44ea-aea7-79eeedfd2e5c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.952108 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29bcacff-6888-44ea-aea7-79eeedfd2e5c","Type":"ContainerDied","Data":"25fbd41db99e9fe0a09f07c2410171bc9100cfd406357aa975ddcb5f1c5be19b"} Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.952241 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.953408 4979 scope.go:117] "RemoveContainer" containerID="3edabfebe35a16b52c89d040072c407d09ef108efd34683f83838647ad8307c0" Jan 30 22:04:38 crc kubenswrapper[4979]: I0130 22:04:38.982135 4979 scope.go:117] "RemoveContainer" containerID="75c7338e97a58ddf4ddd727df9ef55106db41079c528545a6fbf69380cb74b87" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.005800 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.014677 4979 scope.go:117] "RemoveContainer" containerID="72bfac39271d57208e4cda3468ff5baccf6b1245e4b758e434adb5dd707842e2" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.062383 4979 scope.go:117] "RemoveContainer" containerID="94bba789930aa5a8fdbeb17ece25d6e06e37c5349f248886c3fad60a18712d23" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.088724 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.088780 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:04:39 crc kubenswrapper[4979]: E0130 22:04:39.089231 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="sg-core" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089249 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="sg-core" Jan 30 22:04:39 crc kubenswrapper[4979]: E0130 22:04:39.089282 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="proxy-httpd" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089309 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="proxy-httpd" Jan 30 22:04:39 crc kubenswrapper[4979]: E0130 22:04:39.089324 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-central-agent" Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089331 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-central-agent" Jan 30 22:04:39 crc kubenswrapper[4979]: E0130 22:04:39.089356 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" 
containerName="ceilometer-notification-agent"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089383 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-notification-agent"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089639 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="proxy-httpd"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089660 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="sg-core"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089671 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-notification-agent"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.089681 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" containerName="ceilometer-central-agent"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.092100 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.094879 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.095150 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.102292 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.170847 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-config-data\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.171197 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-run-httpd\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.171409 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-scripts\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.171498 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.171544 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-log-httpd\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.171593 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd572\" (UniqueName: \"kubernetes.io/projected/735d6952-ef80-442e-b87b-a32834aa4acb-kube-api-access-vd572\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.171667 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274117 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274193 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-log-httpd\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274231 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd572\" (UniqueName: \"kubernetes.io/projected/735d6952-ef80-442e-b87b-a32834aa4acb-kube-api-access-vd572\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274262 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274287 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-config-data\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274350 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-run-httpd\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.274442 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-scripts\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.275008 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-log-httpd\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.275354 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-run-httpd\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.281301 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-scripts\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.282712 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.284710 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.295300 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-config-data\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.306395 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd572\" (UniqueName: \"kubernetes.io/projected/735d6952-ef80-442e-b87b-a32834aa4acb-kube-api-access-vd572\") pod \"ceilometer-0\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.415951 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.870737 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 22:04:39 crc kubenswrapper[4979]: I0130 22:04:39.964855 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerStarted","Data":"01c6464a8f040a12abc8ff599cbfa55d11072c8f6eee4cfc9c902ea1c0c52c3a"}
Jan 30 22:04:40 crc kubenswrapper[4979]: I0130 22:04:40.978250 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerStarted","Data":"4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428"}
Jan 30 22:04:41 crc kubenswrapper[4979]: I0130 22:04:41.083052 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29bcacff-6888-44ea-aea7-79eeedfd2e5c" path="/var/lib/kubelet/pods/29bcacff-6888-44ea-aea7-79eeedfd2e5c/volumes"
Jan 30 22:04:42 crc kubenswrapper[4979]: I0130 22:04:42.020371 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerStarted","Data":"69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d"}
Jan 30 22:04:43 crc kubenswrapper[4979]: I0130 22:04:43.041132 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerStarted","Data":"745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393"}
Jan 30 22:04:43 crc kubenswrapper[4979]: I0130 22:04:43.375718 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.181829 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-pqfg4"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.184069 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.190722 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.191176 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.202987 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pqfg4"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.289563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.289685 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-config-data\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.289767 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-scripts\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.289799 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tw8w\" (UniqueName: \"kubernetes.io/projected/15e523da-837e-4af0-835b-55b1950fc487-kube-api-access-9tw8w\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.392460 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.392603 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-config-data\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.392716 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-scripts\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.392750 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tw8w\" (UniqueName: \"kubernetes.io/projected/15e523da-837e-4af0-835b-55b1950fc487-kube-api-access-9tw8w\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.401615 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-scripts\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.402234 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.432741 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-config-data\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.438511 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tw8w\" (UniqueName: \"kubernetes.io/projected/15e523da-837e-4af0-835b-55b1950fc487-kube-api-access-9tw8w\") pod \"nova-cell0-cell-mapping-pqfg4\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.461220 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.470564 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.474437 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.476814 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.478479 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.484104 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.491490 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.507259 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.508800 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.511001 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.511595 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pqfg4"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.522723 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.534784 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599586 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-config-data\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599656 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599701 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jnkc\" (UniqueName: \"kubernetes.io/projected/62853806-2bda-4664-b5e7-cc1dc951f658-kube-api-access-9jnkc\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599794 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwbx6\" (UniqueName: \"kubernetes.io/projected/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-kube-api-access-mwbx6\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599812 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2grt\" (UniqueName: \"kubernetes.io/projected/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-kube-api-access-m2grt\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599874 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-logs\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599896 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-config-data\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599924 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.599959 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.670183 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.692810 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.705408 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-logs\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706018 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-config-data\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706150 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706163 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-logs\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706282 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706355 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-config-data\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706428 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706482 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jnkc\" (UniqueName: \"kubernetes.io/projected/62853806-2bda-4664-b5e7-cc1dc951f658-kube-api-access-9jnkc\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706786 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwbx6\" (UniqueName: \"kubernetes.io/projected/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-kube-api-access-mwbx6\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706832 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2grt\" (UniqueName: \"kubernetes.io/projected/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-kube-api-access-m2grt\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.706908 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.714006 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.735480 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-config-data\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.746498 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.760787 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.761892 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-config-data\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.766640 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jnkc\" (UniqueName: \"kubernetes.io/projected/62853806-2bda-4664-b5e7-cc1dc951f658-kube-api-access-9jnkc\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.767484 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.768958 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.774168 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " pod="openstack/nova-scheduler-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.782461 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2grt\" (UniqueName: \"kubernetes.io/projected/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-kube-api-access-m2grt\") pod \"nova-cell1-novncproxy-0\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.784372 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwbx6\" (UniqueName: \"kubernetes.io/projected/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-kube-api-access-mwbx6\") pod \"nova-api-0\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.825928 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.826255 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfr9w\" (UniqueName: \"kubernetes.io/projected/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-kube-api-access-sfr9w\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.826539 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.826570 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-logs\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.826860 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-config-data\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.841332 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6p7nr"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.841965 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.845326 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.857057 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6p7nr"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929345 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929472 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929552 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-logs\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929599 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-config\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929730 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929824 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xswj2\" (UniqueName: \"kubernetes.io/projected/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-kube-api-access-xswj2\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929870 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-svc\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929892 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-config-data\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.929921 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfr9w\" (UniqueName: \"kubernetes.io/projected/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-kube-api-access-sfr9w\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.931627 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-logs\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.934686 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.937497 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-config-data\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.940239 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:44.953624 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfr9w\" (UniqueName: \"kubernetes.io/projected/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-kube-api-access-sfr9w\") pod \"nova-metadata-0\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.033791 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.033873 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-config\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.033940 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.033972 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.033995 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xswj2\" (UniqueName: \"kubernetes.io/projected/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-kube-api-access-xswj2\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.034039 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-svc\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.035186 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-svc\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.035786 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.037006 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.037966 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.050384 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-config\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.070246 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xswj2\" (UniqueName: \"kubernetes.io/projected/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-kube-api-access-xswj2\") pod \"dnsmasq-dns-757b4f8459-6p7nr\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") " pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.117719 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.188986 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.328266 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gfv78"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.329686 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.340139 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.340464 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.345077 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gfv78"]
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.446768 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.447407 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-config-data\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.447466 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk2lm\" (UniqueName: \"kubernetes.io/projected/181d93b8-d7d4-4184-beb4-f4e96f221af5-kube-api-access-tk2lm\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.447571 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-scripts\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.550656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.550728 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-config-data\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.550784 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk2lm\" (UniqueName: \"kubernetes.io/projected/181d93b8-d7d4-4184-beb4-f4e96f221af5-kube-api-access-tk2lm\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.550825 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-scripts\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.560349 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.560553 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-config-data\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.561069 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-scripts\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.586289 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk2lm\" (UniqueName: \"kubernetes.io/projected/181d93b8-d7d4-4184-beb4-f4e96f221af5-kube-api-access-tk2lm\") pod \"nova-cell1-conductor-db-sync-gfv78\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:46 crc kubenswrapper[4979]: I0130 22:04:45.650887 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gfv78"
Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.054252 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.099160 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pqfg4"]
Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.108859 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50eca4bc-cd69-4cce-a995-ac34fbcd5edd","Type":"ContainerStarted","Data":"418ba1031b4d4e3f1080f6d157787ad41890fff919486dd4b096f4ae99738787"}
Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.114117 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.116130 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pqfg4" event={"ID":"15e523da-837e-4af0-835b-55b1950fc487","Type":"ContainerStarted","Data":"f8e2db0bc8aced80b6b6b46a1c0ed2401ba1be3a5bf03e9af3531ffe48935419"}
Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.124002 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gfv78"]
Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.136381 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6p7nr"]
Jan 30 22:04:47 crc kubenswrapper[4979]: W0130 22:04:47.148696 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfcf14a9_0e1b_4d80_9a4f_124eb0297975.slice/crio-2ef3709c456fed3d68ff1473de1e7aa592a0aa52c3ecb4cb4ed939ed96223baf WatchSource:0}: Error finding container 2ef3709c456fed3d68ff1473de1e7aa592a0aa52c3ecb4cb4ed939ed96223baf: Status 404 returned error can't find the container with id 2ef3709c456fed3d68ff1473de1e7aa592a0aa52c3ecb4cb4ed939ed96223baf
Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.149128 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 22:04:47 crc kubenswrapper[4979]: I0130 22:04:47.222022 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.165433 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gfv78" event={"ID":"181d93b8-d7d4-4184-beb4-f4e96f221af5","Type":"ContainerStarted","Data":"03fcd58bcede39bf0ce2578dd97f75b5dfefffae36f69c196076f3970b1d584e"}
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.166422 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gfv78" event={"ID":"181d93b8-d7d4-4184-beb4-f4e96f221af5","Type":"ContainerStarted","Data":"8439fa81627ed0d7327a33566a06586c473b5bde902c6eee485f6d5ed225dc1e"}
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.199586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerStarted","Data":"a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74"}
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.201778 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.207759 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-gfv78" podStartSLOduration=3.207722071 podStartE2EDuration="3.207722071s" podCreationTimestamp="2026-01-30 22:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:48.19163266 +0000 UTC m=+1484.152879713" watchObservedRunningTime="2026-01-30 22:04:48.207722071 +0000 UTC m=+1484.168969104"
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.216645 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfcf14a9-0e1b-4d80-9a4f-124eb0297975","Type":"ContainerStarted","Data":"2ef3709c456fed3d68ff1473de1e7aa592a0aa52c3ecb4cb4ed939ed96223baf"}
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.228553 4979 generic.go:334] "Generic (PLEG): container finished" podID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerID="013d174f6848cb2abad2b004411d67e5b0bf2bc2e07bdd6263bb0777501bbd65" exitCode=0
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.228638 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" event={"ID":"c5be09bc-3cf9-443f-bfc7-904e8ed874f8","Type":"ContainerDied","Data":"013d174f6848cb2abad2b004411d67e5b0bf2bc2e07bdd6263bb0777501bbd65"}
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.228675 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" event={"ID":"c5be09bc-3cf9-443f-bfc7-904e8ed874f8","Type":"ContainerStarted","Data":"dad5ecae947304a11e938cd18a6af2bcf48628237b04604b4febaa6b29c4e97a"}
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.232939 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe19f1e0-5b59-46b5-a88c-eb1600e144ca","Type":"ContainerStarted","Data":"6e35d2aa80751cf03f328d82e8a9f5b326aa2261f7ea467cd2e725d52fe418c8"}
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.308075 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.390299609 podStartE2EDuration="9.308046741s" podCreationTimestamp="2026-01-30 22:04:39 +0000 UTC" firstStartedPulling="2026-01-30 22:04:39.884138858 +0000 UTC m=+1475.845385891" lastFinishedPulling="2026-01-30 22:04:47.80188599 +0000 UTC m=+1483.763133023" observedRunningTime="2026-01-30 22:04:48.270698439 +0000 UTC m=+1484.231945562" watchObservedRunningTime="2026-01-30 22:04:48.308046741 +0000 UTC m=+1484.269293774"
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.317066 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"62853806-2bda-4664-b5e7-cc1dc951f658","Type":"ContainerStarted","Data":"34e4ecb3720c18b8e6cdeb76dc056d9810bc42be63e75ebe0aec06bf1bbc4605"}
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.320063 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pqfg4" event={"ID":"15e523da-837e-4af0-835b-55b1950fc487","Type":"ContainerStarted","Data":"5b8c31638b5486835421778350c31d34ef94715ad8979849599bdf9ef248f6ef"}
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.955829 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pqfg4" podStartSLOduration=4.955792589 podStartE2EDuration="4.955792589s" podCreationTimestamp="2026-01-30 22:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:48.366975241 +0000 UTC m=+1484.328222264" watchObservedRunningTime="2026-01-30 22:04:48.955792589 +0000 UTC m=+1484.917039622"
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.962388 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 22:04:48 crc kubenswrapper[4979]: I0130 22:04:48.970261 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 22:04:49 crc kubenswrapper[4979]: I0130 22:04:49.375704 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" event={"ID":"c5be09bc-3cf9-443f-bfc7-904e8ed874f8","Type":"ContainerStarted","Data":"b7bcfd864469b2db27c19576fcc10b62425238ba1c0620d37863dcb933d25457"}
Jan 30 22:04:49 crc kubenswrapper[4979]: I0130 22:04:49.406771 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" podStartSLOduration=5.406742049 podStartE2EDuration="5.406742049s" podCreationTimestamp="2026-01-30 22:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:04:49.399150495 +0000 UTC m=+1485.360397528" watchObservedRunningTime="2026-01-30 22:04:49.406742049 +0000 UTC m=+1485.367989082"
Jan 30 22:04:50 crc kubenswrapper[4979]: I0130 22:04:50.189992 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.190332 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.269612 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nph2b"]
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.269908 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerName="dnsmasq-dns" containerID="cri-o://cde1d8ef9853814ac0538e668f22acd209e1123ba255255d91b5dde006032de3" gracePeriod=10
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.441537 4979 generic.go:334] "Generic (PLEG): container finished" podID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerID="cde1d8ef9853814ac0538e668f22acd209e1123ba255255d91b5dde006032de3" exitCode=0
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.441601 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" event={"ID":"058e90a8-7816-4982-96eb-0390f9f09ef5","Type":"ContainerDied","Data":"cde1d8ef9853814ac0538e668f22acd209e1123ba255255d91b5dde006032de3"}
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.830114 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b"
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.937461 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-swift-storage-0\") pod \"058e90a8-7816-4982-96eb-0390f9f09ef5\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") "
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.937550 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-sb\") pod \"058e90a8-7816-4982-96eb-0390f9f09ef5\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") "
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.937655 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-nb\") pod \"058e90a8-7816-4982-96eb-0390f9f09ef5\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") "
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.937759 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-svc\") pod \"058e90a8-7816-4982-96eb-0390f9f09ef5\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") "
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.937872 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-config\") pod \"058e90a8-7816-4982-96eb-0390f9f09ef5\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") "
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.937996 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqtbq\" (UniqueName: \"kubernetes.io/projected/058e90a8-7816-4982-96eb-0390f9f09ef5-kube-api-access-sqtbq\") pod \"058e90a8-7816-4982-96eb-0390f9f09ef5\" (UID: \"058e90a8-7816-4982-96eb-0390f9f09ef5\") "
Jan 30 22:04:55 crc kubenswrapper[4979]: I0130 22:04:55.971917 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058e90a8-7816-4982-96eb-0390f9f09ef5-kube-api-access-sqtbq" (OuterVolumeSpecName: "kube-api-access-sqtbq") pod "058e90a8-7816-4982-96eb-0390f9f09ef5" (UID: "058e90a8-7816-4982-96eb-0390f9f09ef5"). InnerVolumeSpecName "kube-api-access-sqtbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.044931 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqtbq\" (UniqueName: \"kubernetes.io/projected/058e90a8-7816-4982-96eb-0390f9f09ef5-kube-api-access-sqtbq\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.101362 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "058e90a8-7816-4982-96eb-0390f9f09ef5" (UID: "058e90a8-7816-4982-96eb-0390f9f09ef5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.104971 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "058e90a8-7816-4982-96eb-0390f9f09ef5" (UID: "058e90a8-7816-4982-96eb-0390f9f09ef5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.112382 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-config" (OuterVolumeSpecName: "config") pod "058e90a8-7816-4982-96eb-0390f9f09ef5" (UID: "058e90a8-7816-4982-96eb-0390f9f09ef5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.112779 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "058e90a8-7816-4982-96eb-0390f9f09ef5" (UID: "058e90a8-7816-4982-96eb-0390f9f09ef5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.113432 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "058e90a8-7816-4982-96eb-0390f9f09ef5" (UID: "058e90a8-7816-4982-96eb-0390f9f09ef5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.146934 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-config\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.146970 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.146983 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.146991 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.146999 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/058e90a8-7816-4982-96eb-0390f9f09ef5-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.461506 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfcf14a9-0e1b-4d80-9a4f-124eb0297975","Type":"ContainerStarted","Data":"ecfb14e719180563265ba5e760a73d7e28c05ff3af344419909ff52f4fdb9e55"}
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.463346 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50eca4bc-cd69-4cce-a995-ac34fbcd5edd","Type":"ContainerStarted","Data":"4bd5fadc7d49f6d0917b463f6bb16e126a837db96ddad54cd74e72ea4b07d33a"}
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.463443 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="50eca4bc-cd69-4cce-a995-ac34fbcd5edd" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4bd5fadc7d49f6d0917b463f6bb16e126a837db96ddad54cd74e72ea4b07d33a" gracePeriod=30
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.471394 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b" event={"ID":"058e90a8-7816-4982-96eb-0390f9f09ef5","Type":"ContainerDied","Data":"bd0c08ab5da0f9972ab0ecfaa7d4a96b3e692f626faf2e99b754b19a6fd17552"}
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.471874 4979 scope.go:117] "RemoveContainer" containerID="cde1d8ef9853814ac0538e668f22acd209e1123ba255255d91b5dde006032de3"
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.471730 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nph2b"
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.480993 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe19f1e0-5b59-46b5-a88c-eb1600e144ca","Type":"ContainerStarted","Data":"697a94299886e8994ee2d34c9b0e4c88fb90d75ed35c441a317ac901a551c738"}
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.481070 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe19f1e0-5b59-46b5-a88c-eb1600e144ca","Type":"ContainerStarted","Data":"c658069cee73c9bc8e5ac764b88cf596ffd81c0d3853577f7c9348be326c8804"}
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.484196 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"62853806-2bda-4664-b5e7-cc1dc951f658","Type":"ContainerStarted","Data":"fc49bd9a3e8b9707d33bcd57d31e07c561b449531292a4b7a75ae419cab20f8d"}
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.500268 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.01186695 podStartE2EDuration="12.500240652s" podCreationTimestamp="2026-01-30 22:04:44 +0000 UTC" firstStartedPulling="2026-01-30 22:04:47.080443256 +0000 UTC m=+1483.041690329" lastFinishedPulling="2026-01-30 22:04:55.568816998 +0000 UTC m=+1491.530064031" observedRunningTime="2026-01-30 22:04:56.491482577 +0000 UTC m=+1492.452729620" watchObservedRunningTime="2026-01-30 22:04:56.500240652 +0000 UTC m=+1492.461487695"
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.526968 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.988133242 podStartE2EDuration="12.526940527s" podCreationTimestamp="2026-01-30 22:04:44 +0000 UTC" firstStartedPulling="2026-01-30 22:04:47.148555742 +0000 UTC m=+1483.109802775" lastFinishedPulling="2026-01-30 22:04:55.687363027 +0000 UTC m=+1491.648610060" observedRunningTime="2026-01-30 22:04:56.511284028 +0000 UTC m=+1492.472531061" watchObservedRunningTime="2026-01-30 22:04:56.526940527 +0000 UTC m=+1492.488187560"
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.544907 4979 scope.go:117] "RemoveContainer" containerID="d466d90f2d37f6a5ffe695492f5a86148cdb526bdbc83ccf9934c5bdbb75a655"
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.547881 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.128573749 podStartE2EDuration="12.547854268s" podCreationTimestamp="2026-01-30 22:04:44 +0000 UTC" firstStartedPulling="2026-01-30 22:04:47.145173932 +0000 UTC m=+1483.106420965" lastFinishedPulling="2026-01-30 22:04:55.564454461 +0000 UTC m=+1491.525701484" observedRunningTime="2026-01-30 22:04:56.543089991 +0000 UTC m=+1492.504337024" watchObservedRunningTime="2026-01-30 22:04:56.547854268 +0000 UTC m=+1492.509101301"
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.577479 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nph2b"]
Jan 30 22:04:56 crc kubenswrapper[4979]: I0130 22:04:56.585829 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nph2b"]
Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.081492 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" path="/var/lib/kubelet/pods/058e90a8-7816-4982-96eb-0390f9f09ef5/volumes"
Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.498945 4979 generic.go:334] "Generic (PLEG): container finished" podID="15e523da-837e-4af0-835b-55b1950fc487" containerID="5b8c31638b5486835421778350c31d34ef94715ad8979849599bdf9ef248f6ef" exitCode=0
Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.499020 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pqfg4" event={"ID":"15e523da-837e-4af0-835b-55b1950fc487","Type":"ContainerDied","Data":"5b8c31638b5486835421778350c31d34ef94715ad8979849599bdf9ef248f6ef"}
Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.502155 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfcf14a9-0e1b-4d80-9a4f-124eb0297975","Type":"ContainerStarted","Data":"b2b7325582ff9647ea175d6bfac0463d3ad25165a5b1a3f5fb440e39f198a42c"}
Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.502296 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-log" containerID="cri-o://ecfb14e719180563265ba5e760a73d7e28c05ff3af344419909ff52f4fdb9e55" gracePeriod=30
Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.502435 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-metadata" containerID="cri-o://b2b7325582ff9647ea175d6bfac0463d3ad25165a5b1a3f5fb440e39f198a42c" gracePeriod=30
Jan 30 22:04:57 crc kubenswrapper[4979]: I0130 22:04:57.554353 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.060475936 podStartE2EDuration="13.554311834s" podCreationTimestamp="2026-01-30 22:04:44 +0000 UTC" firstStartedPulling="2026-01-30 22:04:47.188496084 +0000 UTC m=+1483.149743117" lastFinishedPulling="2026-01-30 22:04:55.682331982 +0000 UTC m=+1491.643579015" observedRunningTime="2026-01-30 22:04:57.548119448 +0000 UTC m=+1493.509366481" watchObservedRunningTime="2026-01-30 22:04:57.554311834 +0000 UTC m=+1493.515558867"
Jan 30 22:04:58 crc kubenswrapper[4979]: I0130 22:04:58.522872 4979 generic.go:334] "Generic
(PLEG): container finished" podID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerID="b2b7325582ff9647ea175d6bfac0463d3ad25165a5b1a3f5fb440e39f198a42c" exitCode=0 Jan 30 22:04:58 crc kubenswrapper[4979]: I0130 22:04:58.523501 4979 generic.go:334] "Generic (PLEG): container finished" podID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerID="ecfb14e719180563265ba5e760a73d7e28c05ff3af344419909ff52f4fdb9e55" exitCode=143 Jan 30 22:04:58 crc kubenswrapper[4979]: I0130 22:04:58.523453 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfcf14a9-0e1b-4d80-9a4f-124eb0297975","Type":"ContainerDied","Data":"b2b7325582ff9647ea175d6bfac0463d3ad25165a5b1a3f5fb440e39f198a42c"} Jan 30 22:04:58 crc kubenswrapper[4979]: I0130 22:04:58.523663 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfcf14a9-0e1b-4d80-9a4f-124eb0297975","Type":"ContainerDied","Data":"ecfb14e719180563265ba5e760a73d7e28c05ff3af344419909ff52f4fdb9e55"} Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.002048 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.116953 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-combined-ca-bundle\") pod \"15e523da-837e-4af0-835b-55b1950fc487\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.117112 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-scripts\") pod \"15e523da-837e-4af0-835b-55b1950fc487\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.117178 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-config-data\") pod \"15e523da-837e-4af0-835b-55b1950fc487\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.117390 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tw8w\" (UniqueName: \"kubernetes.io/projected/15e523da-837e-4af0-835b-55b1950fc487-kube-api-access-9tw8w\") pod \"15e523da-837e-4af0-835b-55b1950fc487\" (UID: \"15e523da-837e-4af0-835b-55b1950fc487\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.126309 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-scripts" (OuterVolumeSpecName: "scripts") pod "15e523da-837e-4af0-835b-55b1950fc487" (UID: "15e523da-837e-4af0-835b-55b1950fc487"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.129232 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e523da-837e-4af0-835b-55b1950fc487-kube-api-access-9tw8w" (OuterVolumeSpecName: "kube-api-access-9tw8w") pod "15e523da-837e-4af0-835b-55b1950fc487" (UID: "15e523da-837e-4af0-835b-55b1950fc487"). InnerVolumeSpecName "kube-api-access-9tw8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.150156 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-config-data" (OuterVolumeSpecName: "config-data") pod "15e523da-837e-4af0-835b-55b1950fc487" (UID: "15e523da-837e-4af0-835b-55b1950fc487"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.151398 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15e523da-837e-4af0-835b-55b1950fc487" (UID: "15e523da-837e-4af0-835b-55b1950fc487"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.219959 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tw8w\" (UniqueName: \"kubernetes.io/projected/15e523da-837e-4af0-835b-55b1950fc487-kube-api-access-9tw8w\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.220014 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.220049 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.220072 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e523da-837e-4af0-835b-55b1950fc487-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.363380 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.424008 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-config-data\") pod \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.424425 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-combined-ca-bundle\") pod \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.424471 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-logs\") pod \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.424536 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfr9w\" (UniqueName: \"kubernetes.io/projected/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-kube-api-access-sfr9w\") pod \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\" (UID: \"bfcf14a9-0e1b-4d80-9a4f-124eb0297975\") " Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.425148 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-logs" (OuterVolumeSpecName: "logs") pod "bfcf14a9-0e1b-4d80-9a4f-124eb0297975" (UID: "bfcf14a9-0e1b-4d80-9a4f-124eb0297975"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.429356 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-kube-api-access-sfr9w" (OuterVolumeSpecName: "kube-api-access-sfr9w") pod "bfcf14a9-0e1b-4d80-9a4f-124eb0297975" (UID: "bfcf14a9-0e1b-4d80-9a4f-124eb0297975"). InnerVolumeSpecName "kube-api-access-sfr9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.451849 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfcf14a9-0e1b-4d80-9a4f-124eb0297975" (UID: "bfcf14a9-0e1b-4d80-9a4f-124eb0297975"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.452885 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-config-data" (OuterVolumeSpecName: "config-data") pod "bfcf14a9-0e1b-4d80-9a4f-124eb0297975" (UID: "bfcf14a9-0e1b-4d80-9a4f-124eb0297975"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.526875 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.526926 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.526940 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.526949 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfr9w\" (UniqueName: \"kubernetes.io/projected/bfcf14a9-0e1b-4d80-9a4f-124eb0297975-kube-api-access-sfr9w\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.533187 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bfcf14a9-0e1b-4d80-9a4f-124eb0297975","Type":"ContainerDied","Data":"2ef3709c456fed3d68ff1473de1e7aa592a0aa52c3ecb4cb4ed939ed96223baf"} Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.533262 4979 scope.go:117] "RemoveContainer" containerID="b2b7325582ff9647ea175d6bfac0463d3ad25165a5b1a3f5fb440e39f198a42c" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.533431 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.543026 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pqfg4" event={"ID":"15e523da-837e-4af0-835b-55b1950fc487","Type":"ContainerDied","Data":"f8e2db0bc8aced80b6b6b46a1c0ed2401ba1be3a5bf03e9af3531ffe48935419"} Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.543138 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8e2db0bc8aced80b6b6b46a1c0ed2401ba1be3a5bf03e9af3531ffe48935419" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.543209 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pqfg4" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.577093 4979 scope.go:117] "RemoveContainer" containerID="ecfb14e719180563265ba5e760a73d7e28c05ff3af344419909ff52f4fdb9e55" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.585988 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.600178 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.614506 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: E0130 22:04:59.614976 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e523da-837e-4af0-835b-55b1950fc487" containerName="nova-manage" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.614996 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e523da-837e-4af0-835b-55b1950fc487" containerName="nova-manage" Jan 30 22:04:59 crc kubenswrapper[4979]: E0130 22:04:59.615016 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerName="dnsmasq-dns" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615022 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerName="dnsmasq-dns" Jan 30 22:04:59 crc kubenswrapper[4979]: E0130 22:04:59.615056 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-metadata" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615063 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-metadata" Jan 30 22:04:59 crc kubenswrapper[4979]: E0130 22:04:59.615074 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-log" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615080 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-log" Jan 30 22:04:59 crc kubenswrapper[4979]: E0130 22:04:59.615094 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerName="init" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615100 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerName="init" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615292 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-log" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615311 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" containerName="nova-metadata-metadata" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615348 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e523da-837e-4af0-835b-55b1950fc487" containerName="nova-manage" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.615357 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="058e90a8-7816-4982-96eb-0390f9f09ef5" containerName="dnsmasq-dns" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.616538 4979 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.620984 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.621265 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.625679 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.714571 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.715297 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-log" containerID="cri-o://c658069cee73c9bc8e5ac764b88cf596ffd81c0d3853577f7c9348be326c8804" gracePeriod=30 Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.715383 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-api" containerID="cri-o://697a94299886e8994ee2d34c9b0e4c88fb90d75ed35c441a317ac901a551c738" gracePeriod=30 Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.730245 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.730530 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="62853806-2bda-4664-b5e7-cc1dc951f658" containerName="nova-scheduler-scheduler" containerID="cri-o://fc49bd9a3e8b9707d33bcd57d31e07c561b449531292a4b7a75ae419cab20f8d" gracePeriod=30 Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.733065 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.733370 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-logs\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.733426 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-config-data\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.733517 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zljzj\" (UniqueName: \"kubernetes.io/projected/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-kube-api-access-zljzj\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.733746 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.802869 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:04:59 crc kubenswrapper[4979]: E0130 22:04:59.803862 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-zljzj logs nova-metadata-tls-certs], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/nova-metadata-0" podUID="3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.836258 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-logs\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.836316 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-config-data\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.836371 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zljzj\" (UniqueName: \"kubernetes.io/projected/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-kube-api-access-zljzj\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.836416 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.836477 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.837846 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-logs\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.842923 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.842999 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 22:04:59 crc kubenswrapper[4979]: 
I0130 22:04:59.843230 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-config-data\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.843651 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.856950 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zljzj\" (UniqueName: \"kubernetes.io/projected/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-kube-api-access-zljzj\") pod \"nova-metadata-0\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " pod="openstack/nova-metadata-0" Jan 30 22:04:59 crc kubenswrapper[4979]: I0130 22:04:59.936391 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.556086 4979 generic.go:334] "Generic (PLEG): container finished" podID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerID="697a94299886e8994ee2d34c9b0e4c88fb90d75ed35c441a317ac901a551c738" exitCode=0 Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.556133 4979 generic.go:334] "Generic (PLEG): container finished" podID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerID="c658069cee73c9bc8e5ac764b88cf596ffd81c0d3853577f7c9348be326c8804" exitCode=143 Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.556159 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe19f1e0-5b59-46b5-a88c-eb1600e144ca","Type":"ContainerDied","Data":"697a94299886e8994ee2d34c9b0e4c88fb90d75ed35c441a317ac901a551c738"} Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.556213 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.556212 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe19f1e0-5b59-46b5-a88c-eb1600e144ca","Type":"ContainerDied","Data":"c658069cee73c9bc8e5ac764b88cf596ffd81c0d3853577f7c9348be326c8804"} Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.567792 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.652983 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zljzj\" (UniqueName: \"kubernetes.io/projected/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-kube-api-access-zljzj\") pod \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.653137 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-nova-metadata-tls-certs\") pod \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.653478 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-config-data\") pod \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.653621 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-logs\") pod \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.653731 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-combined-ca-bundle\") pod \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\" (UID: \"3c7c7aee-d4ea-4138-8bbc-72e985b3efc7\") " Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.654090 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-logs" (OuterVolumeSpecName: "logs") pod "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" (UID: "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.659787 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-config-data" (OuterVolumeSpecName: "config-data") pod "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" (UID: "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.661287 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" (UID: "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.662311 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-kube-api-access-zljzj" (OuterVolumeSpecName: "kube-api-access-zljzj") pod "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" (UID: "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7"). InnerVolumeSpecName "kube-api-access-zljzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.677299 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" (UID: "3c7c7aee-d4ea-4138-8bbc-72e985b3efc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.756756 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.757337 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.757353 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.757367 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zljzj\" (UniqueName: \"kubernetes.io/projected/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-kube-api-access-zljzj\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:00 crc kubenswrapper[4979]: I0130 22:05:00.757384 4979 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.087644 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfcf14a9-0e1b-4d80-9a4f-124eb0297975" path="/var/lib/kubelet/pods/bfcf14a9-0e1b-4d80-9a4f-124eb0297975/volumes" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.143768 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.186883 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwbx6\" (UniqueName: \"kubernetes.io/projected/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-kube-api-access-mwbx6\") pod \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.187083 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-logs\") pod \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.187123 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-config-data\") pod \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.187356 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle\") pod \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.187778 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-logs" (OuterVolumeSpecName: "logs") pod "fe19f1e0-5b59-46b5-a88c-eb1600e144ca" (UID: "fe19f1e0-5b59-46b5-a88c-eb1600e144ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.188783 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.193587 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-kube-api-access-mwbx6" (OuterVolumeSpecName: "kube-api-access-mwbx6") pod "fe19f1e0-5b59-46b5-a88c-eb1600e144ca" (UID: "fe19f1e0-5b59-46b5-a88c-eb1600e144ca"). InnerVolumeSpecName "kube-api-access-mwbx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:01 crc kubenswrapper[4979]: E0130 22:05:01.215458 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle podName:fe19f1e0-5b59-46b5-a88c-eb1600e144ca nodeName:}" failed. No retries permitted until 2026-01-30 22:05:01.715413106 +0000 UTC m=+1497.676660159 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle") pod "fe19f1e0-5b59-46b5-a88c-eb1600e144ca" (UID: "fe19f1e0-5b59-46b5-a88c-eb1600e144ca") : error deleting /var/lib/kubelet/pods/fe19f1e0-5b59-46b5-a88c-eb1600e144ca/volume-subpaths: remove /var/lib/kubelet/pods/fe19f1e0-5b59-46b5-a88c-eb1600e144ca/volume-subpaths: no such file or directory Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.220008 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-config-data" (OuterVolumeSpecName: "config-data") pod "fe19f1e0-5b59-46b5-a88c-eb1600e144ca" (UID: "fe19f1e0-5b59-46b5-a88c-eb1600e144ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.290235 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwbx6\" (UniqueName: \"kubernetes.io/projected/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-kube-api-access-mwbx6\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.290277 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:01 crc kubenswrapper[4979]: E0130 22:05:01.402840 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62853806_2bda_4664_b5e7_cc1dc951f658.slice/crio-conmon-fc49bd9a3e8b9707d33bcd57d31e07c561b449531292a4b7a75ae419cab20f8d.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.571297 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.571264 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe19f1e0-5b59-46b5-a88c-eb1600e144ca","Type":"ContainerDied","Data":"6e35d2aa80751cf03f328d82e8a9f5b326aa2261f7ea467cd2e725d52fe418c8"} Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.573259 4979 scope.go:117] "RemoveContainer" containerID="697a94299886e8994ee2d34c9b0e4c88fb90d75ed35c441a317ac901a551c738" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.573798 4979 generic.go:334] "Generic (PLEG): container finished" podID="62853806-2bda-4664-b5e7-cc1dc951f658" containerID="fc49bd9a3e8b9707d33bcd57d31e07c561b449531292a4b7a75ae419cab20f8d" exitCode=0 Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.573919 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.574092 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"62853806-2bda-4664-b5e7-cc1dc951f658","Type":"ContainerDied","Data":"fc49bd9a3e8b9707d33bcd57d31e07c561b449531292a4b7a75ae419cab20f8d"} Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.616496 4979 scope.go:117] "RemoveContainer" containerID="c658069cee73c9bc8e5ac764b88cf596ffd81c0d3853577f7c9348be326c8804" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.641578 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.668889 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.687567 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: E0130 22:05:01.688109 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-api" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.688132 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-api" Jan 30 22:05:01 crc kubenswrapper[4979]: E0130 22:05:01.688148 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-log" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.688155 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-log" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.688452 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-api" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.688487 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" containerName="nova-api-log" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.690240 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.693583 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.693684 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.699241 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.806662 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle\") pod \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\" (UID: \"fe19f1e0-5b59-46b5-a88c-eb1600e144ca\") " Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.807018 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.807092 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-config-data\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.807169 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b45ea9a1-6c1f-4719-8432-2add7fdef96d-logs\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.807221 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.807255 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/b45ea9a1-6c1f-4719-8432-2add7fdef96d-kube-api-access-zc9pb\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.813736 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe19f1e0-5b59-46b5-a88c-eb1600e144ca" (UID: "fe19f1e0-5b59-46b5-a88c-eb1600e144ca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.913040 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/b45ea9a1-6c1f-4719-8432-2add7fdef96d-kube-api-access-zc9pb\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.913184 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.913232 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-config-data\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.913362 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b45ea9a1-6c1f-4719-8432-2add7fdef96d-logs\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.913439 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.913508 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe19f1e0-5b59-46b5-a88c-eb1600e144ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.914126 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b45ea9a1-6c1f-4719-8432-2add7fdef96d-logs\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.921308 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.921471 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.921695 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-config-data\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " 
pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.944124 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.949402 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/b45ea9a1-6c1f-4719-8432-2add7fdef96d-kube-api-access-zc9pb\") pod \"nova-metadata-0\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " pod="openstack/nova-metadata-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.953800 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.967414 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.969991 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.973258 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 22:05:01 crc kubenswrapper[4979]: I0130 22:05:01.977245 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.015812 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-config-data\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.015911 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d48f1e79-1816-4321-ba02-25d28d095a47-logs\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.015945 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.015997 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzgr\" (UniqueName: \"kubernetes.io/projected/d48f1e79-1816-4321-ba02-25d28d095a47-kube-api-access-vxzgr\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.026852 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.039590 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.039656 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.118496 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-config-data\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.119301 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d48f1e79-1816-4321-ba02-25d28d095a47-logs\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.119470 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.119710 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzgr\" (UniqueName: \"kubernetes.io/projected/d48f1e79-1816-4321-ba02-25d28d095a47-kube-api-access-vxzgr\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.120452 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d48f1e79-1816-4321-ba02-25d28d095a47-logs\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.123705 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-config-data\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.124649 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.140415 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzgr\" (UniqueName: \"kubernetes.io/projected/d48f1e79-1816-4321-ba02-25d28d095a47-kube-api-access-vxzgr\") pod \"nova-api-0\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") " 
pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.223121 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.290080 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.323075 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-combined-ca-bundle\") pod \"62853806-2bda-4664-b5e7-cc1dc951f658\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.323263 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-config-data\") pod \"62853806-2bda-4664-b5e7-cc1dc951f658\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.323453 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jnkc\" (UniqueName: \"kubernetes.io/projected/62853806-2bda-4664-b5e7-cc1dc951f658-kube-api-access-9jnkc\") pod \"62853806-2bda-4664-b5e7-cc1dc951f658\" (UID: \"62853806-2bda-4664-b5e7-cc1dc951f658\") " Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.331361 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62853806-2bda-4664-b5e7-cc1dc951f658-kube-api-access-9jnkc" (OuterVolumeSpecName: "kube-api-access-9jnkc") pod "62853806-2bda-4664-b5e7-cc1dc951f658" (UID: "62853806-2bda-4664-b5e7-cc1dc951f658"). InnerVolumeSpecName "kube-api-access-9jnkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.356262 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-config-data" (OuterVolumeSpecName: "config-data") pod "62853806-2bda-4664-b5e7-cc1dc951f658" (UID: "62853806-2bda-4664-b5e7-cc1dc951f658"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.358406 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62853806-2bda-4664-b5e7-cc1dc951f658" (UID: "62853806-2bda-4664-b5e7-cc1dc951f658"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.426860 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.426919 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jnkc\" (UniqueName: \"kubernetes.io/projected/62853806-2bda-4664-b5e7-cc1dc951f658-kube-api-access-9jnkc\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.426930 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62853806-2bda-4664-b5e7-cc1dc951f658-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:02 crc kubenswrapper[4979]: W0130 22:05:02.546973 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb45ea9a1_6c1f_4719_8432_2add7fdef96d.slice/crio-cb1a59203ab85e4b8be1f22657c8e3ce137007d98b95b2249b478cc2e64ec70a WatchSource:0}: Error finding container cb1a59203ab85e4b8be1f22657c8e3ce137007d98b95b2249b478cc2e64ec70a: Status 404 returned error can't find the container with id cb1a59203ab85e4b8be1f22657c8e3ce137007d98b95b2249b478cc2e64ec70a Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.548877 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.589046 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"62853806-2bda-4664-b5e7-cc1dc951f658","Type":"ContainerDied","Data":"34e4ecb3720c18b8e6cdeb76dc056d9810bc42be63e75ebe0aec06bf1bbc4605"} Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.589118 4979 scope.go:117] "RemoveContainer" containerID="fc49bd9a3e8b9707d33bcd57d31e07c561b449531292a4b7a75ae419cab20f8d" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.589069 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.590740 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b45ea9a1-6c1f-4719-8432-2add7fdef96d","Type":"ContainerStarted","Data":"cb1a59203ab85e4b8be1f22657c8e3ce137007d98b95b2249b478cc2e64ec70a"} Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.635589 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.663841 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.699522 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: E0130 22:05:02.700791 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62853806-2bda-4664-b5e7-cc1dc951f658" containerName="nova-scheduler-scheduler" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.700814 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="62853806-2bda-4664-b5e7-cc1dc951f658" containerName="nova-scheduler-scheduler" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.701491 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="62853806-2bda-4664-b5e7-cc1dc951f658" containerName="nova-scheduler-scheduler" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.702997 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.706072 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.727393 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.733735 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.733799 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcf2\" (UniqueName: \"kubernetes.io/projected/4df90142-0487-4f26-8fb8-4ea21cda53d5-kube-api-access-6mcf2\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.733859 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-config-data\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.762883 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:02 crc kubenswrapper[4979]: W0130 22:05:02.763969 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd48f1e79_1816_4321_ba02_25d28d095a47.slice/crio-28a45c0d84e0def4e750a8392a3aa6209623d49d81b458ceca7124e9d2340fda 
WatchSource:0}: Error finding container 28a45c0d84e0def4e750a8392a3aa6209623d49d81b458ceca7124e9d2340fda: Status 404 returned error can't find the container with id 28a45c0d84e0def4e750a8392a3aa6209623d49d81b458ceca7124e9d2340fda Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.836734 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.836824 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcf2\" (UniqueName: \"kubernetes.io/projected/4df90142-0487-4f26-8fb8-4ea21cda53d5-kube-api-access-6mcf2\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.836905 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-config-data\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.842880 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.843782 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-config-data\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:02 crc kubenswrapper[4979]: I0130 22:05:02.855356 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcf2\" (UniqueName: \"kubernetes.io/projected/4df90142-0487-4f26-8fb8-4ea21cda53d5-kube-api-access-6mcf2\") pod \"nova-scheduler-0\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.025780 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.085764 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c7c7aee-d4ea-4138-8bbc-72e985b3efc7" path="/var/lib/kubelet/pods/3c7c7aee-d4ea-4138-8bbc-72e985b3efc7/volumes" Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.086349 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62853806-2bda-4664-b5e7-cc1dc951f658" path="/var/lib/kubelet/pods/62853806-2bda-4664-b5e7-cc1dc951f658/volumes" Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.087401 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe19f1e0-5b59-46b5-a88c-eb1600e144ca" path="/var/lib/kubelet/pods/fe19f1e0-5b59-46b5-a88c-eb1600e144ca/volumes" Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.531065 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:03 crc kubenswrapper[4979]: W0130 22:05:03.535615 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4df90142_0487_4f26_8fb8_4ea21cda53d5.slice/crio-7b84177e95b1b0a6d39f6b6ff9de05d3f93d855b4f18c28d8844d6758839a5e7 WatchSource:0}: Error finding container 7b84177e95b1b0a6d39f6b6ff9de05d3f93d855b4f18c28d8844d6758839a5e7: Status 404 returned error can't find the container with id 7b84177e95b1b0a6d39f6b6ff9de05d3f93d855b4f18c28d8844d6758839a5e7 Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.603195 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4df90142-0487-4f26-8fb8-4ea21cda53d5","Type":"ContainerStarted","Data":"7b84177e95b1b0a6d39f6b6ff9de05d3f93d855b4f18c28d8844d6758839a5e7"} Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.605258 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d48f1e79-1816-4321-ba02-25d28d095a47","Type":"ContainerStarted","Data":"356d4372d8c5225081f000b89512382cdbfb9f751a63bb77f25fb1fa6f8b6835"} Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.605292 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d48f1e79-1816-4321-ba02-25d28d095a47","Type":"ContainerStarted","Data":"57384d21f71f60e665c1ed1b7a019bcb0982b43a15a6940f08c2f451c4df3380"} Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.605305 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d48f1e79-1816-4321-ba02-25d28d095a47","Type":"ContainerStarted","Data":"28a45c0d84e0def4e750a8392a3aa6209623d49d81b458ceca7124e9d2340fda"} Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.609347 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b45ea9a1-6c1f-4719-8432-2add7fdef96d","Type":"ContainerStarted","Data":"5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c"} Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.609385 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b45ea9a1-6c1f-4719-8432-2add7fdef96d","Type":"ContainerStarted","Data":"bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e"} Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.641928 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.641903256 podStartE2EDuration="2.641903256s" 
podCreationTimestamp="2026-01-30 22:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:03.628446574 +0000 UTC m=+1499.589693627" watchObservedRunningTime="2026-01-30 22:05:03.641903256 +0000 UTC m=+1499.603150289" Jan 30 22:05:03 crc kubenswrapper[4979]: I0130 22:05:03.661804 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.661784958 podStartE2EDuration="2.661784958s" podCreationTimestamp="2026-01-30 22:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:03.656120207 +0000 UTC m=+1499.617367230" watchObservedRunningTime="2026-01-30 22:05:03.661784958 +0000 UTC m=+1499.623031991" Jan 30 22:05:04 crc kubenswrapper[4979]: I0130 22:05:04.644973 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4df90142-0487-4f26-8fb8-4ea21cda53d5","Type":"ContainerStarted","Data":"fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed"} Jan 30 22:05:04 crc kubenswrapper[4979]: I0130 22:05:04.686554 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.686526664 podStartE2EDuration="2.686526664s" podCreationTimestamp="2026-01-30 22:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:04.676602108 +0000 UTC m=+1500.637849131" watchObservedRunningTime="2026-01-30 22:05:04.686526664 +0000 UTC m=+1500.647773687" Jan 30 22:05:06 crc kubenswrapper[4979]: I0130 22:05:06.671011 4979 generic.go:334] "Generic (PLEG): container finished" podID="181d93b8-d7d4-4184-beb4-f4e96f221af5" containerID="03fcd58bcede39bf0ce2578dd97f75b5dfefffae36f69c196076f3970b1d584e" exitCode=0 Jan 30 22:05:06 crc kubenswrapper[4979]: I0130 22:05:06.671109 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gfv78" event={"ID":"181d93b8-d7d4-4184-beb4-f4e96f221af5","Type":"ContainerDied","Data":"03fcd58bcede39bf0ce2578dd97f75b5dfefffae36f69c196076f3970b1d584e"} Jan 30 22:05:07 crc kubenswrapper[4979]: I0130 22:05:07.027293 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 22:05:07 crc kubenswrapper[4979]: I0130 22:05:07.027387 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.026766 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.070639 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.156151 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-config-data\") pod \"181d93b8-d7d4-4184-beb4-f4e96f221af5\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.156270 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk2lm\" (UniqueName: \"kubernetes.io/projected/181d93b8-d7d4-4184-beb4-f4e96f221af5-kube-api-access-tk2lm\") pod \"181d93b8-d7d4-4184-beb4-f4e96f221af5\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.156355 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-scripts\") pod \"181d93b8-d7d4-4184-beb4-f4e96f221af5\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.156462 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-combined-ca-bundle\") pod \"181d93b8-d7d4-4184-beb4-f4e96f221af5\" (UID: \"181d93b8-d7d4-4184-beb4-f4e96f221af5\") " Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.162320 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181d93b8-d7d4-4184-beb4-f4e96f221af5-kube-api-access-tk2lm" (OuterVolumeSpecName: "kube-api-access-tk2lm") pod "181d93b8-d7d4-4184-beb4-f4e96f221af5" (UID: "181d93b8-d7d4-4184-beb4-f4e96f221af5"). InnerVolumeSpecName "kube-api-access-tk2lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.163470 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-scripts" (OuterVolumeSpecName: "scripts") pod "181d93b8-d7d4-4184-beb4-f4e96f221af5" (UID: "181d93b8-d7d4-4184-beb4-f4e96f221af5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.188740 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-config-data" (OuterVolumeSpecName: "config-data") pod "181d93b8-d7d4-4184-beb4-f4e96f221af5" (UID: "181d93b8-d7d4-4184-beb4-f4e96f221af5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.189978 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "181d93b8-d7d4-4184-beb4-f4e96f221af5" (UID: "181d93b8-d7d4-4184-beb4-f4e96f221af5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.259014 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.259425 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.259445 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk2lm\" (UniqueName: \"kubernetes.io/projected/181d93b8-d7d4-4184-beb4-f4e96f221af5-kube-api-access-tk2lm\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.259458 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/181d93b8-d7d4-4184-beb4-f4e96f221af5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.692777 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gfv78" event={"ID":"181d93b8-d7d4-4184-beb4-f4e96f221af5","Type":"ContainerDied","Data":"8439fa81627ed0d7327a33566a06586c473b5bde902c6eee485f6d5ed225dc1e"} Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.693241 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8439fa81627ed0d7327a33566a06586c473b5bde902c6eee485f6d5ed225dc1e" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.692917 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gfv78" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.806370 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 22:05:08 crc kubenswrapper[4979]: E0130 22:05:08.806903 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="181d93b8-d7d4-4184-beb4-f4e96f221af5" containerName="nova-cell1-conductor-db-sync" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.806927 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="181d93b8-d7d4-4184-beb4-f4e96f221af5" containerName="nova-cell1-conductor-db-sync" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.807213 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="181d93b8-d7d4-4184-beb4-f4e96f221af5" containerName="nova-cell1-conductor-db-sync" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.808093 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.818873 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.823619 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.871904 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.871997 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcsgr\" (UniqueName: \"kubernetes.io/projected/2f627a1e-42e6-4af6-90f1-750c01bcf076-kube-api-access-vcsgr\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.872107 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.974316 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.974414 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcsgr\" (UniqueName: \"kubernetes.io/projected/2f627a1e-42e6-4af6-90f1-750c01bcf076-kube-api-access-vcsgr\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.974449 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.986090 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.986132 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:08 crc kubenswrapper[4979]: I0130 22:05:08.993335 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcsgr\" (UniqueName: \"kubernetes.io/projected/2f627a1e-42e6-4af6-90f1-750c01bcf076-kube-api-access-vcsgr\") pod \"nova-cell1-conductor-0\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:09 crc kubenswrapper[4979]: I0130 22:05:09.168495 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:09 crc kubenswrapper[4979]: I0130 22:05:09.421358 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 22:05:09 crc kubenswrapper[4979]: I0130 22:05:09.642381 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 22:05:09 crc kubenswrapper[4979]: W0130 22:05:09.647193 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f627a1e_42e6_4af6_90f1_750c01bcf076.slice/crio-a0de9700bb7fcf5a664741b82e8a5660815e5d09636e24070c5df5ee3f5b2854 WatchSource:0}: Error finding container a0de9700bb7fcf5a664741b82e8a5660815e5d09636e24070c5df5ee3f5b2854: Status 404 returned error can't find the container with id a0de9700bb7fcf5a664741b82e8a5660815e5d09636e24070c5df5ee3f5b2854 Jan 30 22:05:09 crc kubenswrapper[4979]: I0130 22:05:09.705452 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2f627a1e-42e6-4af6-90f1-750c01bcf076","Type":"ContainerStarted","Data":"a0de9700bb7fcf5a664741b82e8a5660815e5d09636e24070c5df5ee3f5b2854"} Jan 30 22:05:10 crc kubenswrapper[4979]: I0130 22:05:10.721600 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2f627a1e-42e6-4af6-90f1-750c01bcf076","Type":"ContainerStarted","Data":"d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918"} Jan 30 22:05:10 crc kubenswrapper[4979]: I0130 22:05:10.722213 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:10 crc kubenswrapper[4979]: I0130 22:05:10.748983 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.7489494309999998 podStartE2EDuration="2.748949431s" podCreationTimestamp="2026-01-30 22:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:10.743911566 +0000 UTC m=+1506.705158609" watchObservedRunningTime="2026-01-30 22:05:10.748949431 +0000 UTC m=+1506.710196464" Jan 30 22:05:12 crc kubenswrapper[4979]: I0130 22:05:12.028976 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 22:05:12 crc kubenswrapper[4979]: I0130 22:05:12.029414 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 22:05:12 crc kubenswrapper[4979]: I0130 22:05:12.291224 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 22:05:12 crc kubenswrapper[4979]: I0130 22:05:12.291280 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.025960 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.032468 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.037914 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.215927 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.261469 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.261747 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="802f295d-d208-4750-ab9b-c3886cb30091" containerName="kube-state-metrics" containerID="cri-o://20c28cbb64eeb54902f8d83f5e5ce1cb0b5f0534acb2d87e4d7c5f48e86998df" gracePeriod=30 Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.373295 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.373299 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 22:05:13 crc kubenswrapper[4979]: I0130 22:05:13.810515 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.196538 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.767756 4979 generic.go:334] "Generic (PLEG): container finished" podID="802f295d-d208-4750-ab9b-c3886cb30091" containerID="20c28cbb64eeb54902f8d83f5e5ce1cb0b5f0534acb2d87e4d7c5f48e86998df" exitCode=2 Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.767828 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"802f295d-d208-4750-ab9b-c3886cb30091","Type":"ContainerDied","Data":"20c28cbb64eeb54902f8d83f5e5ce1cb0b5f0534acb2d87e4d7c5f48e86998df"} Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.767872 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"802f295d-d208-4750-ab9b-c3886cb30091","Type":"ContainerDied","Data":"073da3757392885be51de106d5a842ae9944cc19e4dc0f6b4686c2786716c716"} Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.767887 4979 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="073da3757392885be51de106d5a842ae9944cc19e4dc0f6b4686c2786716c716" Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.795562 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.925588 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntbh6\" (UniqueName: \"kubernetes.io/projected/802f295d-d208-4750-ab9b-c3886cb30091-kube-api-access-ntbh6\") pod \"802f295d-d208-4750-ab9b-c3886cb30091\" (UID: \"802f295d-d208-4750-ab9b-c3886cb30091\") " Jan 30 22:05:14 crc kubenswrapper[4979]: I0130 22:05:14.944817 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802f295d-d208-4750-ab9b-c3886cb30091-kube-api-access-ntbh6" (OuterVolumeSpecName: "kube-api-access-ntbh6") pod "802f295d-d208-4750-ab9b-c3886cb30091" (UID: "802f295d-d208-4750-ab9b-c3886cb30091"). InnerVolumeSpecName "kube-api-access-ntbh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.027846 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntbh6\" (UniqueName: \"kubernetes.io/projected/802f295d-d208-4750-ab9b-c3886cb30091-kube-api-access-ntbh6\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.580058 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.580423 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-central-agent" containerID="cri-o://4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428" gracePeriod=30 Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.580595 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="proxy-httpd" containerID="cri-o://a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74" gracePeriod=30 Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.580616 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="sg-core" containerID="cri-o://745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393" gracePeriod=30 Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.582198 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-notification-agent" containerID="cri-o://69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d" gracePeriod=30 Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.782546 4979 generic.go:334] "Generic (PLEG): container finished" podID="735d6952-ef80-442e-b87b-a32834aa4acb" containerID="a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74" exitCode=0 Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.782794 4979 generic.go:334] "Generic (PLEG): container finished" podID="735d6952-ef80-442e-b87b-a32834aa4acb" containerID="745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393" exitCode=2 Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.783016 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.782624 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerDied","Data":"a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74"} Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.783818 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerDied","Data":"745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393"} Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.823229 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.836613 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.850456 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:05:15 crc kubenswrapper[4979]: E0130 22:05:15.851098 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802f295d-d208-4750-ab9b-c3886cb30091" containerName="kube-state-metrics" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.851122 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="802f295d-d208-4750-ab9b-c3886cb30091" containerName="kube-state-metrics" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.851364 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="802f295d-d208-4750-ab9b-c3886cb30091" containerName="kube-state-metrics" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.852350 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.856545 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.856980 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.861956 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.951422 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.951578 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9nn\" (UniqueName: \"kubernetes.io/projected/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-api-access-vp9nn\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.951626 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:15 crc kubenswrapper[4979]: I0130 22:05:15.951727 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.056076 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.056161 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.056240 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp9nn\" (UniqueName: \"kubernetes.io/projected/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-api-access-vp9nn\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.056267 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.071727 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.077239 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.097069 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.110020 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp9nn\" (UniqueName: \"kubernetes.io/projected/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-api-access-vp9nn\") pod \"kube-state-metrics-0\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.171204 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.756168 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:05:16 crc kubenswrapper[4979]: W0130 22:05:16.771075 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe5eba1b_535d_4519_97c5_5e8b8f003d96.slice/crio-e767f426672122a96f0cd7039ae94afca30f78fd0f314386c2949731da06d561 WatchSource:0}: Error finding container e767f426672122a96f0cd7039ae94afca30f78fd0f314386c2949731da06d561: Status 404 returned error can't find the container with id e767f426672122a96f0cd7039ae94afca30f78fd0f314386c2949731da06d561 Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.775829 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.799652 4979 generic.go:334] "Generic (PLEG): container finished" podID="735d6952-ef80-442e-b87b-a32834aa4acb" containerID="4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428" exitCode=0 Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.799721 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerDied","Data":"4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428"} Jan 30 22:05:16 crc kubenswrapper[4979]: I0130 22:05:16.804516 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fe5eba1b-535d-4519-97c5-5e8b8f003d96","Type":"ContainerStarted","Data":"e767f426672122a96f0cd7039ae94afca30f78fd0f314386c2949731da06d561"} Jan 30 22:05:17 crc kubenswrapper[4979]: I0130 22:05:17.090763 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802f295d-d208-4750-ab9b-c3886cb30091" path="/var/lib/kubelet/pods/802f295d-d208-4750-ab9b-c3886cb30091/volumes" Jan 30 22:05:17 crc kubenswrapper[4979]: I0130 22:05:17.818408 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fe5eba1b-535d-4519-97c5-5e8b8f003d96","Type":"ContainerStarted","Data":"10c1f71e257099ef965fe8ed07f831aabf20fafa7023702d589fe76aa2e8e755"} Jan 30 22:05:17 crc kubenswrapper[4979]: I0130 22:05:17.819008 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 22:05:17 crc kubenswrapper[4979]: I0130 22:05:17.846082 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.46188223 podStartE2EDuration="2.84605993s" podCreationTimestamp="2026-01-30 22:05:15 +0000 UTC" firstStartedPulling="2026-01-30 22:05:16.775364222 +0000 UTC m=+1512.736611295" lastFinishedPulling="2026-01-30 22:05:17.159541962 +0000 UTC m=+1513.120788995" observedRunningTime="2026-01-30 22:05:17.841363814 +0000 UTC m=+1513.802610917" watchObservedRunningTime="2026-01-30 22:05:17.84605993 +0000 UTC m=+1513.807306963" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.823359 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.835255 4979 generic.go:334] "Generic (PLEG): container finished" podID="735d6952-ef80-442e-b87b-a32834aa4acb" containerID="69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d" exitCode=0 Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.835372 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.835433 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerDied","Data":"69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d"} Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.835476 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735d6952-ef80-442e-b87b-a32834aa4acb","Type":"ContainerDied","Data":"01c6464a8f040a12abc8ff599cbfa55d11072c8f6eee4cfc9c902ea1c0c52c3a"} Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.835513 4979 scope.go:117] "RemoveContainer" containerID="a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.875651 4979 scope.go:117] "RemoveContainer" containerID="745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.916130 4979 scope.go:117] "RemoveContainer" containerID="69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933544 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd572\" (UniqueName: \"kubernetes.io/projected/735d6952-ef80-442e-b87b-a32834aa4acb-kube-api-access-vd572\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933581 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-config-data\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933633 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-sg-core-conf-yaml\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933659 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-run-httpd\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933680 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-combined-ca-bundle\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933762 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-scripts\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.933859 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-log-httpd\") pod \"735d6952-ef80-442e-b87b-a32834aa4acb\" (UID: \"735d6952-ef80-442e-b87b-a32834aa4acb\") " Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.936458 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.936691 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.942451 4979 scope.go:117] "RemoveContainer" containerID="4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.945298 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-scripts" (OuterVolumeSpecName: "scripts") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.953957 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735d6952-ef80-442e-b87b-a32834aa4acb-kube-api-access-vd572" (OuterVolumeSpecName: "kube-api-access-vd572") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "kube-api-access-vd572". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:18 crc kubenswrapper[4979]: I0130 22:05:18.977659 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.039752 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd572\" (UniqueName: \"kubernetes.io/projected/735d6952-ef80-442e-b87b-a32834aa4acb-kube-api-access-vd572\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.039808 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.039823 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.039837 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.039850 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735d6952-ef80-442e-b87b-a32834aa4acb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.051215 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.075376 4979 scope.go:117] "RemoveContainer" containerID="a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.075450 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-config-data" (OuterVolumeSpecName: "config-data") pod "735d6952-ef80-442e-b87b-a32834aa4acb" (UID: "735d6952-ef80-442e-b87b-a32834aa4acb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.076052 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74\": container with ID starting with a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74 not found: ID does not exist" containerID="a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.076098 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74"} err="failed to get container status \"a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74\": rpc error: code = NotFound desc = could not find container \"a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74\": container with ID starting with a10385f540529255f973dcd5059f6ba5841ff3f173dc10802dcb19d53771da74 not found: ID does not exist" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.076129 4979 scope.go:117] "RemoveContainer" containerID="745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393" Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.076560 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393\": container with ID starting with 745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393 not found: ID does not exist" containerID="745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.076645 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393"} err="failed to get container status \"745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393\": rpc error: code = NotFound desc = could not find container \"745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393\": container with ID starting with 745e9b27814ca6f6654e57ccc440a0abfa043f56724ec2dac6e7f28919c79393 not found: ID does not exist" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.076712 4979 scope.go:117] "RemoveContainer" containerID="69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d" Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.077150 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d\": container with ID starting with 69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d not found: ID does not exist" containerID="69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.077185 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d"} err="failed to get container status \"69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d\": rpc error: code = NotFound desc = could not find container \"69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d\": container with ID starting with 69ad38ca5a9ec4ff0e224d7e5bf0529efc005d113f5d15a513e820976d5c724d 
not found: ID does not exist" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.077208 4979 scope.go:117] "RemoveContainer" containerID="4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428" Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.084742 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428\": container with ID starting with 4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428 not found: ID does not exist" containerID="4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.084798 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428"} err="failed to get container status \"4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428\": rpc error: code = NotFound desc = could not find container \"4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428\": container with ID starting with 4cef65f772a12eb2a65a018f9a918e2b15be227e4ddfb5f3a09e3d2605484428 not found: ID does not exist" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.141913 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.141956 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735d6952-ef80-442e-b87b-a32834aa4acb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.171600 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.190043 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.208140 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.209111 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="sg-core" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.209259 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="sg-core" Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.209379 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-central-agent" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.209457 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-central-agent" Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.209533 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-notification-agent" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.209610 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-notification-agent" Jan 30 22:05:19 crc kubenswrapper[4979]: E0130 22:05:19.209688 4979 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="proxy-httpd" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.209756 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="proxy-httpd" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.210070 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-notification-agent" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.210179 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="ceilometer-central-agent" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.210277 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="sg-core" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.210369 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" containerName="proxy-httpd" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.212849 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.217461 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.217753 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.217951 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.221176 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.346204 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-run-httpd\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.346465 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.346629 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-config-data\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.346690 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7zvx\" (UniqueName: \"kubernetes.io/projected/d5452d41-b901-4e6c-876c-06c0f44ba8ef-kube-api-access-r7zvx\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.346762 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-scripts\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.346919 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.347182 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.347455 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-log-httpd\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454116 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-run-httpd\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454243 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454293 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-config-data\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454323 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7zvx\" (UniqueName: \"kubernetes.io/projected/d5452d41-b901-4e6c-876c-06c0f44ba8ef-kube-api-access-r7zvx\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454359 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-scripts\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454402 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc 
kubenswrapper[4979]: I0130 22:05:19.454471 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454567 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-log-httpd\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.454828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-run-httpd\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.455191 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-log-httpd\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.461377 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.462479 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-scripts\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.465716 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-config-data\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.466900 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.475917 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7zvx\" (UniqueName: \"kubernetes.io/projected/d5452d41-b901-4e6c-876c-06c0f44ba8ef-kube-api-access-r7zvx\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.480919 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") " pod="openstack/ceilometer-0" Jan 30 22:05:19 crc kubenswrapper[4979]: I0130 22:05:19.546841 4979 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:05:20 crc kubenswrapper[4979]: I0130 22:05:20.073778 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:20 crc kubenswrapper[4979]: W0130 22:05:20.077968 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5452d41_b901_4e6c_876c_06c0f44ba8ef.slice/crio-5bd4b671b332990f2d364efb9a8c467802e7a027062cdd8b7e06d3d28be2cece WatchSource:0}: Error finding container 5bd4b671b332990f2d364efb9a8c467802e7a027062cdd8b7e06d3d28be2cece: Status 404 returned error can't find the container with id 5bd4b671b332990f2d364efb9a8c467802e7a027062cdd8b7e06d3d28be2cece Jan 30 22:05:20 crc kubenswrapper[4979]: I0130 22:05:20.877438 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerStarted","Data":"1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e"} Jan 30 22:05:20 crc kubenswrapper[4979]: I0130 22:05:20.877768 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerStarted","Data":"5bd4b671b332990f2d364efb9a8c467802e7a027062cdd8b7e06d3d28be2cece"} Jan 30 22:05:21 crc kubenswrapper[4979]: I0130 22:05:21.081590 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735d6952-ef80-442e-b87b-a32834aa4acb" path="/var/lib/kubelet/pods/735d6952-ef80-442e-b87b-a32834aa4acb/volumes" Jan 30 22:05:21 crc kubenswrapper[4979]: I0130 22:05:21.890727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerStarted","Data":"1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef"} Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.034519 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.037100 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.041220 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.295066 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.295826 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.296186 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.296210 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.300661 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.304279 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.611940 4979 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-89c5cd4d5-kdhtr"] Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.617799 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.656759 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-kdhtr"] Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.762716 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72jjl\" (UniqueName: \"kubernetes.io/projected/4bae0355-ad11-48d3-a13f-378354677f77-kube-api-access-72jjl\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.762832 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-config\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.762871 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.762901 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.762920 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.762960 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.864847 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-config\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.865364 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " 
pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.865402 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.865424 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.866951 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.867293 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72jjl\" (UniqueName: \"kubernetes.io/projected/4bae0355-ad11-48d3-a13f-378354677f77-kube-api-access-72jjl\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.866810 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.865874 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-config\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.866637 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.867592 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.867962 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.908426 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerStarted","Data":"010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0"} Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.908721 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72jjl\" (UniqueName: \"kubernetes.io/projected/4bae0355-ad11-48d3-a13f-378354677f77-kube-api-access-72jjl\") pod \"dnsmasq-dns-89c5cd4d5-kdhtr\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.924168 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 22:05:22 crc kubenswrapper[4979]: I0130 22:05:22.996153 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:23 crc kubenswrapper[4979]: I0130 22:05:23.622887 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-kdhtr"] Jan 30 22:05:23 crc kubenswrapper[4979]: I0130 22:05:23.921364 4979 generic.go:334] "Generic (PLEG): container finished" podID="4bae0355-ad11-48d3-a13f-378354677f77" containerID="cb53a0bf80799a9038c0ec96174830f51ef5adf97bb87b1dc554e2dbe52de608" exitCode=0 Jan 30 22:05:23 crc kubenswrapper[4979]: I0130 22:05:23.921437 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" event={"ID":"4bae0355-ad11-48d3-a13f-378354677f77","Type":"ContainerDied","Data":"cb53a0bf80799a9038c0ec96174830f51ef5adf97bb87b1dc554e2dbe52de608"} Jan 30 22:05:23 crc kubenswrapper[4979]: I0130 22:05:23.921765 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" event={"ID":"4bae0355-ad11-48d3-a13f-378354677f77","Type":"ContainerStarted","Data":"fcd7f766ab345ea2e8c0ac6bd8fb4c89c2192ee2d80ef64d952c822915831fd5"} Jan 30 22:05:24 crc kubenswrapper[4979]: I0130 22:05:24.985610 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerStarted","Data":"9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7"} Jan 30 22:05:24 crc kubenswrapper[4979]: I0130 22:05:24.986296 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 22:05:24 crc kubenswrapper[4979]: I0130 22:05:24.994323 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" event={"ID":"4bae0355-ad11-48d3-a13f-378354677f77","Type":"ContainerStarted","Data":"68738a2810356039fe36b036d04e6e47dff0836ae08b737f9907c8607fb78312"} Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.020646 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.623725495 podStartE2EDuration="6.020614815s" podCreationTimestamp="2026-01-30 22:05:19 +0000 UTC" firstStartedPulling="2026-01-30 22:05:20.086694156 +0000 UTC m=+1516.047941189" lastFinishedPulling="2026-01-30 22:05:24.483583446 +0000 UTC m=+1520.444830509" observedRunningTime="2026-01-30 22:05:25.010937446 +0000 UTC m=+1520.972184479" watchObservedRunningTime="2026-01-30 22:05:25.020614815 +0000 UTC m=+1520.981861848" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.036880 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" podStartSLOduration=3.03686073 podStartE2EDuration="3.03686073s" podCreationTimestamp="2026-01-30 22:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:25.030921521 +0000 UTC m=+1520.992168554" watchObservedRunningTime="2026-01-30 22:05:25.03686073 +0000 UTC m=+1520.998107763" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.603984 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bsf45"] Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.606626 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.630937 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bsf45"] Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.772515 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-catalog-content\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.772671 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-utilities\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.772710 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s4vj\" (UniqueName: \"kubernetes.io/projected/9f682a99-2265-4234-a19c-01f62262e96b-kube-api-access-8s4vj\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.842826 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.843124 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-log" containerID="cri-o://57384d21f71f60e665c1ed1b7a019bcb0982b43a15a6940f08c2f451c4df3380" gracePeriod=30 Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.844342 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-api" containerID="cri-o://356d4372d8c5225081f000b89512382cdbfb9f751a63bb77f25fb1fa6f8b6835" gracePeriod=30 Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.874883 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-catalog-content\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.875083 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-utilities\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.875125 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s4vj\" (UniqueName: \"kubernetes.io/projected/9f682a99-2265-4234-a19c-01f62262e96b-kube-api-access-8s4vj\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.875943 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-catalog-content\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.883403 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-utilities\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.919121 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s4vj\" (UniqueName: \"kubernetes.io/projected/9f682a99-2265-4234-a19c-01f62262e96b-kube-api-access-8s4vj\") pod \"redhat-operators-bsf45\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:25 crc kubenswrapper[4979]: I0130 22:05:25.935046 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:26 crc kubenswrapper[4979]: I0130 22:05:26.024740 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:05:26 crc kubenswrapper[4979]: I0130 22:05:26.222715 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 22:05:26 crc kubenswrapper[4979]: I0130 22:05:26.625384 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bsf45"] Jan 30 22:05:26 crc kubenswrapper[4979]: W0130 22:05:26.649389 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f682a99_2265_4234_a19c_01f62262e96b.slice/crio-883aad42c1cc3c7dd42fc0902f4d5edbb27e24722c84dc3e7f6c90f2fbf73ecb WatchSource:0}: Error finding container 883aad42c1cc3c7dd42fc0902f4d5edbb27e24722c84dc3e7f6c90f2fbf73ecb: Status 404 returned error can't find the container with id 883aad42c1cc3c7dd42fc0902f4d5edbb27e24722c84dc3e7f6c90f2fbf73ecb Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.036600 4979 generic.go:334] "Generic (PLEG): container finished" podID="9f682a99-2265-4234-a19c-01f62262e96b" containerID="f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d" exitCode=0 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.036666 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerDied","Data":"f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d"} Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.037159 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerStarted","Data":"883aad42c1cc3c7dd42fc0902f4d5edbb27e24722c84dc3e7f6c90f2fbf73ecb"} Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.043690 4979 generic.go:334] "Generic (PLEG): container finished" podID="50eca4bc-cd69-4cce-a995-ac34fbcd5edd" containerID="4bd5fadc7d49f6d0917b463f6bb16e126a837db96ddad54cd74e72ea4b07d33a" exitCode=137 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.043833 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50eca4bc-cd69-4cce-a995-ac34fbcd5edd","Type":"ContainerDied","Data":"4bd5fadc7d49f6d0917b463f6bb16e126a837db96ddad54cd74e72ea4b07d33a"} Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.044146 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50eca4bc-cd69-4cce-a995-ac34fbcd5edd","Type":"ContainerDied","Data":"418ba1031b4d4e3f1080f6d157787ad41890fff919486dd4b096f4ae99738787"} Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.044164 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="418ba1031b4d4e3f1080f6d157787ad41890fff919486dd4b096f4ae99738787" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.047980 4979 generic.go:334] "Generic (PLEG): container finished" podID="d48f1e79-1816-4321-ba02-25d28d095a47" containerID="57384d21f71f60e665c1ed1b7a019bcb0982b43a15a6940f08c2f451c4df3380" exitCode=143 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.048081 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"d48f1e79-1816-4321-ba02-25d28d095a47","Type":"ContainerDied","Data":"57384d21f71f60e665c1ed1b7a019bcb0982b43a15a6940f08c2f451c4df3380"} Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.091325 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.223323 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-config-data\") pod \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.223409 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-combined-ca-bundle\") pod \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.223749 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2grt\" (UniqueName: \"kubernetes.io/projected/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-kube-api-access-m2grt\") pod \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\" (UID: \"50eca4bc-cd69-4cce-a995-ac34fbcd5edd\") " Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.268182 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-kube-api-access-m2grt" (OuterVolumeSpecName: "kube-api-access-m2grt") pod "50eca4bc-cd69-4cce-a995-ac34fbcd5edd" (UID: "50eca4bc-cd69-4cce-a995-ac34fbcd5edd"). InnerVolumeSpecName "kube-api-access-m2grt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.272121 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-config-data" (OuterVolumeSpecName: "config-data") pod "50eca4bc-cd69-4cce-a995-ac34fbcd5edd" (UID: "50eca4bc-cd69-4cce-a995-ac34fbcd5edd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.298721 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50eca4bc-cd69-4cce-a995-ac34fbcd5edd" (UID: "50eca4bc-cd69-4cce-a995-ac34fbcd5edd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.315105 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.315442 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-central-agent" containerID="cri-o://1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e" gracePeriod=30 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.315578 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="proxy-httpd" containerID="cri-o://9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7" gracePeriod=30 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.315647 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-notification-agent" containerID="cri-o://1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef" gracePeriod=30 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.316927 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="sg-core" containerID="cri-o://010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0" gracePeriod=30 Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.327306 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2grt\" (UniqueName: \"kubernetes.io/projected/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-kube-api-access-m2grt\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.327356 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:27 crc kubenswrapper[4979]: I0130 22:05:27.327372 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50eca4bc-cd69-4cce-a995-ac34fbcd5edd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.076504 4979 generic.go:334] "Generic (PLEG): container finished" podID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerID="9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7" exitCode=0 Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.077064 4979 generic.go:334] "Generic (PLEG): container finished" podID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerID="010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0" exitCode=2 Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.077075 4979 generic.go:334] "Generic (PLEG): container finished" podID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerID="1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef" exitCode=0 Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.077159 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.077700 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerDied","Data":"9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7"} Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.077762 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerDied","Data":"010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0"} Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.077772 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerDied","Data":"1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef"} Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.133195 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.150018 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.171862 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:05:28 crc kubenswrapper[4979]: E0130 22:05:28.204705 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eca4bc-cd69-4cce-a995-ac34fbcd5edd" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.204763 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eca4bc-cd69-4cce-a995-ac34fbcd5edd" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.204999 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="50eca4bc-cd69-4cce-a995-ac34fbcd5edd" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.205770 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.205878 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.210049 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.212416 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.212463 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.351322 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.351410 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.351499 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.351851 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2vbh\" (UniqueName: \"kubernetes.io/projected/95748319-965e-49d8-8a00-c0bc1025337d-kube-api-access-t2vbh\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.351921 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.454351 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2vbh\" (UniqueName: \"kubernetes.io/projected/95748319-965e-49d8-8a00-c0bc1025337d-kube-api-access-t2vbh\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.454678 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.454831 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.454925 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.455101 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.463141 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.464367 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.465789 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.476052 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.484671 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2vbh\" (UniqueName: \"kubernetes.io/projected/95748319-965e-49d8-8a00-c0bc1025337d-kube-api-access-t2vbh\") pod \"nova-cell1-novncproxy-0\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:05:28 crc kubenswrapper[4979]: I0130 22:05:28.531516 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:05:29 crc kubenswrapper[4979]: I0130 22:05:29.084594 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50eca4bc-cd69-4cce-a995-ac34fbcd5edd" path="/var/lib/kubelet/pods/50eca4bc-cd69-4cce-a995-ac34fbcd5edd/volumes"
Jan 30 22:05:29 crc kubenswrapper[4979]: I0130 22:05:29.086416 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 22:05:29 crc kubenswrapper[4979]: W0130 22:05:29.087331 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95748319_965e_49d8_8a00_c0bc1025337d.slice/crio-e4ebf1d98c1bb7fabf7f4934a42326f7066ad90f4a383e7cfc22047a4c8c52a0 WatchSource:0}: Error finding container e4ebf1d98c1bb7fabf7f4934a42326f7066ad90f4a383e7cfc22047a4c8c52a0: Status 404 returned error can't find the container with id e4ebf1d98c1bb7fabf7f4934a42326f7066ad90f4a383e7cfc22047a4c8c52a0
Jan 30 22:05:29 crc kubenswrapper[4979]: I0130 22:05:29.095971 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerStarted","Data":"3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df"}
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.175401 4979 generic.go:334] "Generic (PLEG): container finished" podID="d48f1e79-1816-4321-ba02-25d28d095a47" containerID="356d4372d8c5225081f000b89512382cdbfb9f751a63bb77f25fb1fa6f8b6835" exitCode=0
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.176412 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d48f1e79-1816-4321-ba02-25d28d095a47","Type":"ContainerDied","Data":"356d4372d8c5225081f000b89512382cdbfb9f751a63bb77f25fb1fa6f8b6835"}
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.183348 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"95748319-965e-49d8-8a00-c0bc1025337d","Type":"ContainerStarted","Data":"4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b"}
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.183390 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"95748319-965e-49d8-8a00-c0bc1025337d","Type":"ContainerStarted","Data":"e4ebf1d98c1bb7fabf7f4934a42326f7066ad90f4a383e7cfc22047a4c8c52a0"}
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.215544 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.215508501 podStartE2EDuration="2.215508501s" podCreationTimestamp="2026-01-30 22:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:30.208420821 +0000 UTC m=+1526.169667864" watchObservedRunningTime="2026-01-30 22:05:30.215508501 +0000 UTC m=+1526.176755534"
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.454481 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.504789 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d48f1e79-1816-4321-ba02-25d28d095a47-logs\") pod \"d48f1e79-1816-4321-ba02-25d28d095a47\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") "
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.504874 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-combined-ca-bundle\") pod \"d48f1e79-1816-4321-ba02-25d28d095a47\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") "
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.504923 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxzgr\" (UniqueName: \"kubernetes.io/projected/d48f1e79-1816-4321-ba02-25d28d095a47-kube-api-access-vxzgr\") pod \"d48f1e79-1816-4321-ba02-25d28d095a47\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") "
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.504987 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-config-data\") pod \"d48f1e79-1816-4321-ba02-25d28d095a47\" (UID: \"d48f1e79-1816-4321-ba02-25d28d095a47\") "
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.512839 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d48f1e79-1816-4321-ba02-25d28d095a47-logs" (OuterVolumeSpecName: "logs") pod "d48f1e79-1816-4321-ba02-25d28d095a47" (UID: "d48f1e79-1816-4321-ba02-25d28d095a47"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.517632 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48f1e79-1816-4321-ba02-25d28d095a47-kube-api-access-vxzgr" (OuterVolumeSpecName: "kube-api-access-vxzgr") pod "d48f1e79-1816-4321-ba02-25d28d095a47" (UID: "d48f1e79-1816-4321-ba02-25d28d095a47"). InnerVolumeSpecName "kube-api-access-vxzgr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.552462 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-config-data" (OuterVolumeSpecName: "config-data") pod "d48f1e79-1816-4321-ba02-25d28d095a47" (UID: "d48f1e79-1816-4321-ba02-25d28d095a47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.562103 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d48f1e79-1816-4321-ba02-25d28d095a47" (UID: "d48f1e79-1816-4321-ba02-25d28d095a47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.607063 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d48f1e79-1816-4321-ba02-25d28d095a47-logs\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.607116 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.607130 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxzgr\" (UniqueName: \"kubernetes.io/projected/d48f1e79-1816-4321-ba02-25d28d095a47-kube-api-access-vxzgr\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:30 crc kubenswrapper[4979]: I0130 22:05:30.607140 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d48f1e79-1816-4321-ba02-25d28d095a47-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.170633 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.194484 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d48f1e79-1816-4321-ba02-25d28d095a47","Type":"ContainerDied","Data":"28a45c0d84e0def4e750a8392a3aa6209623d49d81b458ceca7124e9d2340fda"}
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.194555 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.194592 4979 scope.go:117] "RemoveContainer" containerID="356d4372d8c5225081f000b89512382cdbfb9f751a63bb77f25fb1fa6f8b6835"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.197019 4979 generic.go:334] "Generic (PLEG): container finished" podID="9f682a99-2265-4234-a19c-01f62262e96b" containerID="3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df" exitCode=0
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.197089 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerDied","Data":"3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df"}
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.214890 4979 generic.go:334] "Generic (PLEG): container finished" podID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerID="1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e" exitCode=0
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.216254 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.216389 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerDied","Data":"1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e"}
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.216422 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d5452d41-b901-4e6c-876c-06c0f44ba8ef","Type":"ContainerDied","Data":"5bd4b671b332990f2d364efb9a8c467802e7a027062cdd8b7e06d3d28be2cece"}
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224531 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-config-data\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224660 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7zvx\" (UniqueName: \"kubernetes.io/projected/d5452d41-b901-4e6c-876c-06c0f44ba8ef-kube-api-access-r7zvx\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224759 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-scripts\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224795 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-ceilometer-tls-certs\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224860 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-run-httpd\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224904 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-log-httpd\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.224978 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-combined-ca-bundle\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.225175 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-sg-core-conf-yaml\") pod \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\" (UID: \"d5452d41-b901-4e6c-876c-06c0f44ba8ef\") "
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.226491 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.227459 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.237286 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-scripts" (OuterVolumeSpecName: "scripts") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.244168 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5452d41-b901-4e6c-876c-06c0f44ba8ef-kube-api-access-r7zvx" (OuterVolumeSpecName: "kube-api-access-r7zvx") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "kube-api-access-r7zvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.248336 4979 scope.go:117] "RemoveContainer" containerID="57384d21f71f60e665c1ed1b7a019bcb0982b43a15a6940f08c2f451c4df3380"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.314718 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.318314 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.324732 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.333118 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.333174 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5452d41-b901-4e6c-876c-06c0f44ba8ef-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.333192 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.333205 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7zvx\" (UniqueName: \"kubernetes.io/projected/d5452d41-b901-4e6c-876c-06c0f44ba8ef-kube-api-access-r7zvx\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.333218 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.333228 4979 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.348146 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.370274 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.371103 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-api"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.371134 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-api"
Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.371158 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-central-agent"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.371168 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-central-agent"
Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.371186 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-notification-agent"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.371195 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-notification-agent"
Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.371210 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-log"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.371221 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-log"
Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.371245 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="proxy-httpd"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.371255 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="proxy-httpd"
Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.373530 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="sg-core"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.373575 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="sg-core"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.373967 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-notification-agent"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.373999 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="sg-core"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.374014 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-api"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.374059 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="proxy-httpd"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.374079 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" containerName="ceilometer-central-agent"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.374095 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" containerName="nova-api-log"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.379726 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.383331 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.383675 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.384728 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.397895 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.424988 4979 scope.go:117] "RemoveContainer" containerID="9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.431122 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.437848 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-public-tls-certs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.437973 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.438149 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vt8v\" (UniqueName: \"kubernetes.io/projected/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-kube-api-access-7vt8v\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.438246 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-logs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.438339 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-config-data\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.438366 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.438474 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.456009 4979 scope.go:117] "RemoveContainer" containerID="010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.457393 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-config-data" (OuterVolumeSpecName: "config-data") pod "d5452d41-b901-4e6c-876c-06c0f44ba8ef" (UID: "d5452d41-b901-4e6c-876c-06c0f44ba8ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.501337 4979 scope.go:117] "RemoveContainer" containerID="1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.526780 4979 scope.go:117] "RemoveContainer" containerID="1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.544800 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.544961 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vt8v\" (UniqueName: \"kubernetes.io/projected/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-kube-api-access-7vt8v\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.545119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-logs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.545237 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-config-data\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.545259 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.545435 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-public-tls-certs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.545497 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5452d41-b901-4e6c-876c-06c0f44ba8ef-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.545835 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-logs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.553295 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.554854 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-public-tls-certs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.555142 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.557269 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.558264 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-config-data\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.562281 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.567267 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vt8v\" (UniqueName: \"kubernetes.io/projected/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-kube-api-access-7vt8v\") pod \"nova-api-0\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.588157 4979 scope.go:117] "RemoveContainer" containerID="9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7"
Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.588856 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7\": container with ID starting with 9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7 not found: ID does not exist" containerID="9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.588892 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7"} err="failed to get container status \"9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7\": rpc error: code = NotFound desc = could not find container \"9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7\": container with ID starting with 9860b2523b11d2cce10e2e8637a49873300a51af24bdaab9a46bb33d3637e6c7 not found: ID does not exist"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.588917 4979 scope.go:117] "RemoveContainer" containerID="010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0"
Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.589197 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0\": container with ID starting with 010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0 not found: ID does not exist" containerID="010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.589225 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0"} err="failed to get container status \"010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0\": rpc error: code = NotFound desc = could not find container \"010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0\": container with ID starting with 010acd89e35467aee58ffc63d31d84a9a30ea0ce835e8f5ce26bc8e3bb6855b0 not found: ID does not exist"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.589239 4979 scope.go:117] "RemoveContainer" containerID="1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef"
Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.591681 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef\": container with ID starting with 1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef not found: ID does not exist" containerID="1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.591716 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef"} err="failed to get container status \"1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef\": rpc error: code = NotFound desc = could not find container \"1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef\": container with ID starting with 1b48d3d1934f2bc1d7efe33b71669bdac5fd97eee9d590bb6677be68143a27ef not found: ID does not exist"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.591733 4979 scope.go:117] "RemoveContainer" containerID="1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e"
Jan 30 22:05:31 crc kubenswrapper[4979]: E0130 22:05:31.592623 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e\": container with ID starting with 1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e not found: ID does not exist" containerID="1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.592655 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e"} err="failed to get container status \"1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e\": rpc error: code = NotFound desc = could not find container \"1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e\": container with ID starting with 1a3da758b5c9680c6134879d1ef981846b589580572991ef1f518286878faa9e not found: ID does not exist"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.603209 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.606632 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.612700 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.618568 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.618884 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.622619 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.651393 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-scripts\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.651480 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.651629 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-log-httpd\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.651778 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xc79\" (UniqueName: \"kubernetes.io/projected/3b34adef-df84-42dd-a052-5e543c4182b5-kube-api-access-7xc79\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.652188 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.652234 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-config-data\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.652259 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.652414 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-run-httpd\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.746490 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754657 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754711 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-config-data\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754738 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754813 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-run-httpd\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754863 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-scripts\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754901 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.754995 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-log-httpd\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.755021 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xc79\" (UniqueName: \"kubernetes.io/projected/3b34adef-df84-42dd-a052-5e543c4182b5-kube-api-access-7xc79\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.756673 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-run-httpd\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.756740 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-log-httpd\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.763259 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-scripts\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.763888 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.764133 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.766157 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-config-data\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.769723 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.785392 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xc79\" (UniqueName: \"kubernetes.io/projected/3b34adef-df84-42dd-a052-5e543c4182b5-kube-api-access-7xc79\") pod \"ceilometer-0\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " pod="openstack/ceilometer-0"
Jan 30 22:05:31 crc kubenswrapper[4979]: I0130 22:05:31.946283 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.040393 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.040766 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.234524 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerStarted","Data":"ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8"}
Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.287895 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.314657 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bsf45" podStartSLOduration=2.614654167 podStartE2EDuration="7.314612353s" podCreationTimestamp="2026-01-30 22:05:25 +0000 UTC" firstStartedPulling="2026-01-30 22:05:27.043057571 +0000 UTC m=+1523.004304604" lastFinishedPulling="2026-01-30 22:05:31.743015737 +0000 UTC m=+1527.704262790" observedRunningTime="2026-01-30 22:05:32.254406508 +0000 UTC m=+1528.215653541" watchObservedRunningTime="2026-01-30 22:05:32.314612353 +0000 UTC m=+1528.275859386"
Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.474789 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 22:05:32 crc kubenswrapper[4979]: W0130 22:05:32.475208 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b34adef_df84_42dd_a052_5e543c4182b5.slice/crio-1544871f33799c3038bca6a1237524bb73b783f1c5406b279be53a7e8d66904e WatchSource:0}: Error finding container 1544871f33799c3038bca6a1237524bb73b783f1c5406b279be53a7e8d66904e: Status 404 returned error can't find the container with id 1544871f33799c3038bca6a1237524bb73b783f1c5406b279be53a7e8d66904e
Jan 30 22:05:32 crc kubenswrapper[4979]: I0130 22:05:32.998262 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr"
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.080025 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6p7nr"]
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.080400 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerName="dnsmasq-dns" containerID="cri-o://b7bcfd864469b2db27c19576fcc10b62425238ba1c0620d37863dcb933d25457" gracePeriod=10
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.125885 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48f1e79-1816-4321-ba02-25d28d095a47" path="/var/lib/kubelet/pods/d48f1e79-1816-4321-ba02-25d28d095a47/volumes"
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.127174 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5452d41-b901-4e6c-876c-06c0f44ba8ef" path="/var/lib/kubelet/pods/d5452d41-b901-4e6c-876c-06c0f44ba8ef/volumes"
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.304055 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b","Type":"ContainerStarted","Data":"75ed2e4b32fb1961aa4410d1ed60d78ef4fdaa5313919f801c512171fa44ddd8"}
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.304570 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b","Type":"ContainerStarted","Data":"92f16fea6d07515ee136c5ba64aa266adb56de1f0255864e495e362a46f2f310"}
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.304586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b","Type":"ContainerStarted","Data":"94fccc846accac2626b4330c74f1995d347342c1b98a558385ef9d93cbd0d6e8"}
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.314430 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerStarted","Data":"5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267"}
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.314496 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerStarted","Data":"1544871f33799c3038bca6a1237524bb73b783f1c5406b279be53a7e8d66904e"}
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.320607 4979 generic.go:334] "Generic (PLEG): container finished" podID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerID="b7bcfd864469b2db27c19576fcc10b62425238ba1c0620d37863dcb933d25457" exitCode=0
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.320670 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" event={"ID":"c5be09bc-3cf9-443f-bfc7-904e8ed874f8","Type":"ContainerDied","Data":"b7bcfd864469b2db27c19576fcc10b62425238ba1c0620d37863dcb933d25457"}
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.332741 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.33271733 podStartE2EDuration="2.33271733s" podCreationTimestamp="2026-01-30 22:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:33.331979001 +0000 UTC m=+1529.293226034" watchObservedRunningTime="2026-01-30 22:05:33.33271733 +0000 UTC m=+1529.293964363"
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.531865 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.826089 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.916772 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-nb\") pod \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") "
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.916853 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-svc\") pod \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") "
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.916894 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xswj2\" (UniqueName: \"kubernetes.io/projected/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-kube-api-access-xswj2\") pod \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") "
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.917101 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-config\") pod \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") "
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.917143 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-sb\") pod \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") "
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.917185 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-swift-storage-0\") pod \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\" (UID: \"c5be09bc-3cf9-443f-bfc7-904e8ed874f8\") "
Jan 30 22:05:33 crc kubenswrapper[4979]: I0130 22:05:33.945900 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-kube-api-access-xswj2" (OuterVolumeSpecName: "kube-api-access-xswj2") pod "c5be09bc-3cf9-443f-bfc7-904e8ed874f8" (UID: "c5be09bc-3cf9-443f-bfc7-904e8ed874f8"). InnerVolumeSpecName "kube-api-access-xswj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.021617 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xswj2\" (UniqueName: \"kubernetes.io/projected/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-kube-api-access-xswj2\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.043305 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5be09bc-3cf9-443f-bfc7-904e8ed874f8" (UID: "c5be09bc-3cf9-443f-bfc7-904e8ed874f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.046604 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c5be09bc-3cf9-443f-bfc7-904e8ed874f8" (UID: "c5be09bc-3cf9-443f-bfc7-904e8ed874f8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.056741 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5be09bc-3cf9-443f-bfc7-904e8ed874f8" (UID: "c5be09bc-3cf9-443f-bfc7-904e8ed874f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.059871 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-config" (OuterVolumeSpecName: "config") pod "c5be09bc-3cf9-443f-bfc7-904e8ed874f8" (UID: "c5be09bc-3cf9-443f-bfc7-904e8ed874f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.063673 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5be09bc-3cf9-443f-bfc7-904e8ed874f8" (UID: "c5be09bc-3cf9-443f-bfc7-904e8ed874f8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.123630 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-config\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.123823 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.123844 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.123858 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.123874 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5be09bc-3cf9-443f-bfc7-904e8ed874f8-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.333625 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr"
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.333618 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6p7nr" event={"ID":"c5be09bc-3cf9-443f-bfc7-904e8ed874f8","Type":"ContainerDied","Data":"dad5ecae947304a11e938cd18a6af2bcf48628237b04604b4febaa6b29c4e97a"}
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.334178 4979 scope.go:117] "RemoveContainer" containerID="b7bcfd864469b2db27c19576fcc10b62425238ba1c0620d37863dcb933d25457"
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.338869 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerStarted","Data":"b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c"}
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.358347 4979 scope.go:117] "RemoveContainer" containerID="013d174f6848cb2abad2b004411d67e5b0bf2bc2e07bdd6263bb0777501bbd65"
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.376310 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6p7nr"]
Jan 30 22:05:34 crc kubenswrapper[4979]: I0130 22:05:34.403722 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6p7nr"]
Jan 30 22:05:35 crc kubenswrapper[4979]: I0130 22:05:35.083457 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" path="/var/lib/kubelet/pods/c5be09bc-3cf9-443f-bfc7-904e8ed874f8/volumes"
Jan 30 22:05:35 crc kubenswrapper[4979]: I0130 22:05:35.936731 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bsf45"
Jan 30 22:05:35 crc kubenswrapper[4979]: I0130 22:05:35.937250 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bsf45"
Jan 30 22:05:36 crc kubenswrapper[4979]: I0130 22:05:36.377997 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerStarted","Data":"fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511"}
Jan 30 22:05:36 crc kubenswrapper[4979]: I0130 22:05:36.994505 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bsf45" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="registry-server" probeResult="failure" output=<
Jan 30 22:05:36 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s
Jan 30 22:05:36 crc kubenswrapper[4979]: >
Jan 30 22:05:38 crc kubenswrapper[4979]: I0130 22:05:38.400809 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerStarted","Data":"93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed"}
Jan 30 22:05:38 crc kubenswrapper[4979]: I0130 22:05:38.401883 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 22:05:38 crc kubenswrapper[4979]: I0130 22:05:38.433751 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.920565781 podStartE2EDuration="7.43371792s" podCreationTimestamp="2026-01-30 22:05:31 +0000 UTC" firstStartedPulling="2026-01-30 22:05:32.47896983 +0000 UTC m=+1528.440216863" lastFinishedPulling="2026-01-30 22:05:37.992121969 +0000 UTC m=+1533.953369002" observedRunningTime="2026-01-30 22:05:38.422263162 +0000 UTC m=+1534.383510195" watchObservedRunningTime="2026-01-30 22:05:38.43371792 +0000 UTC m=+1534.394964953"
Jan 30 22:05:38 crc kubenswrapper[4979]: I0130 22:05:38.531865 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:05:38 crc kubenswrapper[4979]: I0130 22:05:38.555878 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.445990 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.652369 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-2zdqm"]
Jan 30 22:05:39 crc kubenswrapper[4979]: E0130 22:05:39.653102 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerName="init"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.653124 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerName="init"
Jan 30 22:05:39 crc kubenswrapper[4979]: E0130 22:05:39.653156 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerName="dnsmasq-dns"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.653165 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerName="dnsmasq-dns"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.653452 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5be09bc-3cf9-443f-bfc7-904e8ed874f8" containerName="dnsmasq-dns"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.660649 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.665709 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2zdqm"]
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.668260 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.669600 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.758560 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-config-data\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.758695 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-scripts\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.758816 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdlvq\" (UniqueName: \"kubernetes.io/projected/707c6502-cbf2-4d94-b032-6d6eeebb581e-kube-api-access-tdlvq\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.758928 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.861185 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlvq\" (UniqueName: \"kubernetes.io/projected/707c6502-cbf2-4d94-b032-6d6eeebb581e-kube-api-access-tdlvq\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.861282 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.861416 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-config-data\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.861495 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-scripts\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.874924 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-scripts\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.874998 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-config-data\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.875690 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.885827 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlvq\" (UniqueName: \"kubernetes.io/projected/707c6502-cbf2-4d94-b032-6d6eeebb581e-kube-api-access-tdlvq\") pod \"nova-cell1-cell-mapping-2zdqm\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " pod="openstack/nova-cell1-cell-mapping-2zdqm"
Jan 30 22:05:39 crc kubenswrapper[4979]: I0130 22:05:39.995218 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2zdqm" Jan 30 22:05:40 crc kubenswrapper[4979]: I0130 22:05:40.535670 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2zdqm"] Jan 30 22:05:40 crc kubenswrapper[4979]: W0130 22:05:40.541577 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod707c6502_cbf2_4d94_b032_6d6eeebb581e.slice/crio-f436ce009d5a774e4911c620605c72b5b9a4529f0d05d0273d575d46035c3a24 WatchSource:0}: Error finding container f436ce009d5a774e4911c620605c72b5b9a4529f0d05d0273d575d46035c3a24: Status 404 returned error can't find the container with id f436ce009d5a774e4911c620605c72b5b9a4529f0d05d0273d575d46035c3a24 Jan 30 22:05:41 crc kubenswrapper[4979]: I0130 22:05:41.441244 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2zdqm" event={"ID":"707c6502-cbf2-4d94-b032-6d6eeebb581e","Type":"ContainerStarted","Data":"273d72dd649ce744e0e01b7f87b5608830beff1b94683daf56bbf5dd25211839"} Jan 30 22:05:41 crc kubenswrapper[4979]: I0130 22:05:41.441707 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2zdqm" event={"ID":"707c6502-cbf2-4d94-b032-6d6eeebb581e","Type":"ContainerStarted","Data":"f436ce009d5a774e4911c620605c72b5b9a4529f0d05d0273d575d46035c3a24"} Jan 30 22:05:41 crc kubenswrapper[4979]: I0130 22:05:41.469184 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-2zdqm" podStartSLOduration=2.469151196 podStartE2EDuration="2.469151196s" podCreationTimestamp="2026-01-30 22:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:41.468145629 +0000 UTC m=+1537.429392682" watchObservedRunningTime="2026-01-30 22:05:41.469151196 +0000 UTC m=+1537.430398229" Jan 30 22:05:41 crc kubenswrapper[4979]: I0130 22:05:41.750368 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 22:05:41 crc kubenswrapper[4979]: I0130 22:05:41.750418 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 22:05:42 crc kubenswrapper[4979]: I0130 22:05:42.780357 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:05:42 crc kubenswrapper[4979]: I0130 22:05:42.780366 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:05:46 crc kubenswrapper[4979]: I0130 22:05:46.014202 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:46 crc kubenswrapper[4979]: I0130 22:05:46.096845 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:46 crc kubenswrapper[4979]: I0130 22:05:46.259599 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-bsf45"] Jan 30 22:05:46 crc kubenswrapper[4979]: I0130 22:05:46.497300 4979 generic.go:334] "Generic (PLEG): container finished" podID="707c6502-cbf2-4d94-b032-6d6eeebb581e" containerID="273d72dd649ce744e0e01b7f87b5608830beff1b94683daf56bbf5dd25211839" exitCode=0 Jan 30 22:05:46 crc kubenswrapper[4979]: I0130 22:05:46.497585 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2zdqm" event={"ID":"707c6502-cbf2-4d94-b032-6d6eeebb581e","Type":"ContainerDied","Data":"273d72dd649ce744e0e01b7f87b5608830beff1b94683daf56bbf5dd25211839"} Jan 30 22:05:47 crc kubenswrapper[4979]: I0130 22:05:47.521590 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bsf45" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="registry-server" containerID="cri-o://ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8" gracePeriod=2 Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.022892 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2zdqm" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.123733 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-scripts\") pod \"707c6502-cbf2-4d94-b032-6d6eeebb581e\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.123799 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-config-data\") pod \"707c6502-cbf2-4d94-b032-6d6eeebb581e\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.123925 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-combined-ca-bundle\") pod \"707c6502-cbf2-4d94-b032-6d6eeebb581e\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.124269 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdlvq\" (UniqueName: \"kubernetes.io/projected/707c6502-cbf2-4d94-b032-6d6eeebb581e-kube-api-access-tdlvq\") pod \"707c6502-cbf2-4d94-b032-6d6eeebb581e\" (UID: \"707c6502-cbf2-4d94-b032-6d6eeebb581e\") " Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.131555 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/707c6502-cbf2-4d94-b032-6d6eeebb581e-kube-api-access-tdlvq" (OuterVolumeSpecName: "kube-api-access-tdlvq") pod "707c6502-cbf2-4d94-b032-6d6eeebb581e" (UID: "707c6502-cbf2-4d94-b032-6d6eeebb581e"). InnerVolumeSpecName "kube-api-access-tdlvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.134728 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-scripts" (OuterVolumeSpecName: "scripts") pod "707c6502-cbf2-4d94-b032-6d6eeebb581e" (UID: "707c6502-cbf2-4d94-b032-6d6eeebb581e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.154262 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.157291 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-config-data" (OuterVolumeSpecName: "config-data") pod "707c6502-cbf2-4d94-b032-6d6eeebb581e" (UID: "707c6502-cbf2-4d94-b032-6d6eeebb581e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.161739 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "707c6502-cbf2-4d94-b032-6d6eeebb581e" (UID: "707c6502-cbf2-4d94-b032-6d6eeebb581e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.227042 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-utilities\") pod \"9f682a99-2265-4234-a19c-01f62262e96b\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.227181 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-catalog-content\") pod \"9f682a99-2265-4234-a19c-01f62262e96b\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.227352 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s4vj\" (UniqueName: \"kubernetes.io/projected/9f682a99-2265-4234-a19c-01f62262e96b-kube-api-access-8s4vj\") pod \"9f682a99-2265-4234-a19c-01f62262e96b\" (UID: \"9f682a99-2265-4234-a19c-01f62262e96b\") " Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.228341 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-utilities" (OuterVolumeSpecName: "utilities") pod "9f682a99-2265-4234-a19c-01f62262e96b" (UID: "9f682a99-2265-4234-a19c-01f62262e96b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.228906 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdlvq\" (UniqueName: \"kubernetes.io/projected/707c6502-cbf2-4d94-b032-6d6eeebb581e-kube-api-access-tdlvq\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.228932 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.228943 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.228956 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707c6502-cbf2-4d94-b032-6d6eeebb581e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.228965 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.231171 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f682a99-2265-4234-a19c-01f62262e96b-kube-api-access-8s4vj" (OuterVolumeSpecName: "kube-api-access-8s4vj") pod "9f682a99-2265-4234-a19c-01f62262e96b" (UID: "9f682a99-2265-4234-a19c-01f62262e96b"). InnerVolumeSpecName "kube-api-access-8s4vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.332195 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s4vj\" (UniqueName: \"kubernetes.io/projected/9f682a99-2265-4234-a19c-01f62262e96b-kube-api-access-8s4vj\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.360725 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f682a99-2265-4234-a19c-01f62262e96b" (UID: "9f682a99-2265-4234-a19c-01f62262e96b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.436218 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f682a99-2265-4234-a19c-01f62262e96b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.543530 4979 generic.go:334] "Generic (PLEG): container finished" podID="9f682a99-2265-4234-a19c-01f62262e96b" containerID="ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8" exitCode=0 Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.543628 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerDied","Data":"ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8"} Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.543670 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsf45" event={"ID":"9f682a99-2265-4234-a19c-01f62262e96b","Type":"ContainerDied","Data":"883aad42c1cc3c7dd42fc0902f4d5edbb27e24722c84dc3e7f6c90f2fbf73ecb"} Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.543685 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bsf45" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.543698 4979 scope.go:117] "RemoveContainer" containerID="ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.556274 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2zdqm" event={"ID":"707c6502-cbf2-4d94-b032-6d6eeebb581e","Type":"ContainerDied","Data":"f436ce009d5a774e4911c620605c72b5b9a4529f0d05d0273d575d46035c3a24"} Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.556328 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f436ce009d5a774e4911c620605c72b5b9a4529f0d05d0273d575d46035c3a24" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.556432 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2zdqm" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.611292 4979 scope.go:117] "RemoveContainer" containerID="3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.628111 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bsf45"] Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.642341 4979 scope.go:117] "RemoveContainer" containerID="f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.649528 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bsf45"] Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.718425 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.718811 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-api" containerID="cri-o://75ed2e4b32fb1961aa4410d1ed60d78ef4fdaa5313919f801c512171fa44ddd8" gracePeriod=30 Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.719008 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-log" containerID="cri-o://92f16fea6d07515ee136c5ba64aa266adb56de1f0255864e495e362a46f2f310" gracePeriod=30 Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.732621 4979 scope.go:117] "RemoveContainer" containerID="ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8" Jan 30 22:05:48 crc kubenswrapper[4979]: E0130 22:05:48.733213 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8\": container with ID starting with ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8 not found: ID does not exist" containerID="ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.733257 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8"} err="failed to get container status \"ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8\": rpc error: code = NotFound desc = could not find container \"ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8\": container with ID starting with ad40dbf9ae4db09710b2123fc56a4e3ea69e0e2cbe1da195896c1cf41d34d1a8 not found: ID does not exist" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.733286 4979 scope.go:117] "RemoveContainer" containerID="3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df" Jan 30 22:05:48 crc kubenswrapper[4979]: E0130 22:05:48.733506 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df\": container with ID starting with 3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df not found: ID does not exist" containerID="3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.733532 4979 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df"} err="failed to get container status \"3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df\": rpc error: code = NotFound desc = could not find container \"3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df\": container with ID starting with 3dcfeb3cbdde9179a85a73fb829bc59d38205c00750dde9bcd144d0aeba067df not found: ID does not exist" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.733549 4979 scope.go:117] "RemoveContainer" containerID="f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d" Jan 30 22:05:48 crc kubenswrapper[4979]: E0130 22:05:48.733776 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d\": container with ID starting with f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d not found: ID does not exist" containerID="f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.733796 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d"} err="failed to get container status \"f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d\": rpc error: code = NotFound desc = could not find container \"f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d\": container with ID starting with f579b052d8a4b00ceb9df827748d10b433d5e29cab1ff610f8e46dcb4a85685d not found: ID does not exist" Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.762221 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.762536 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4df90142-0487-4f26-8fb8-4ea21cda53d5" containerName="nova-scheduler-scheduler" containerID="cri-o://fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed" gracePeriod=30 Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.779185 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.779495 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-log" containerID="cri-o://bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e" gracePeriod=30 Jan 30 22:05:48 crc kubenswrapper[4979]: I0130 22:05:48.779670 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-metadata" containerID="cri-o://5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c" gracePeriod=30 Jan 30 22:05:49 crc kubenswrapper[4979]: I0130 22:05:49.081078 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f682a99-2265-4234-a19c-01f62262e96b" path="/var/lib/kubelet/pods/9f682a99-2265-4234-a19c-01f62262e96b/volumes" Jan 30 22:05:49 crc kubenswrapper[4979]: I0130 22:05:49.571277 4979 generic.go:334] "Generic (PLEG): container finished" podID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" 
containerID="bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e" exitCode=143 Jan 30 22:05:49 crc kubenswrapper[4979]: I0130 22:05:49.571396 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b45ea9a1-6c1f-4719-8432-2add7fdef96d","Type":"ContainerDied","Data":"bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e"} Jan 30 22:05:49 crc kubenswrapper[4979]: I0130 22:05:49.575882 4979 generic.go:334] "Generic (PLEG): container finished" podID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerID="92f16fea6d07515ee136c5ba64aa266adb56de1f0255864e495e362a46f2f310" exitCode=143 Jan 30 22:05:49 crc kubenswrapper[4979]: I0130 22:05:49.575936 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b","Type":"ContainerDied","Data":"92f16fea6d07515ee136c5ba64aa266adb56de1f0255864e495e362a46f2f310"} Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.027709 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": dial tcp 10.217.0.195:8775: connect: connection refused" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.027785 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": dial tcp 10.217.0.195:8775: connect: connection refused" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.485482 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.534480 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-config-data\") pod \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.534559 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b45ea9a1-6c1f-4719-8432-2add7fdef96d-logs\") pod \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.534595 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-combined-ca-bundle\") pod \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.534811 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/b45ea9a1-6c1f-4719-8432-2add7fdef96d-kube-api-access-zc9pb\") pod \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.534867 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-nova-metadata-tls-certs\") pod 
\"b45ea9a1-6c1f-4719-8432-2add7fdef96d\" (UID: \"b45ea9a1-6c1f-4719-8432-2add7fdef96d\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.535783 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b45ea9a1-6c1f-4719-8432-2add7fdef96d-logs" (OuterVolumeSpecName: "logs") pod "b45ea9a1-6c1f-4719-8432-2add7fdef96d" (UID: "b45ea9a1-6c1f-4719-8432-2add7fdef96d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.537671 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b45ea9a1-6c1f-4719-8432-2add7fdef96d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.555121 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45ea9a1-6c1f-4719-8432-2add7fdef96d-kube-api-access-zc9pb" (OuterVolumeSpecName: "kube-api-access-zc9pb") pod "b45ea9a1-6c1f-4719-8432-2add7fdef96d" (UID: "b45ea9a1-6c1f-4719-8432-2add7fdef96d"). InnerVolumeSpecName "kube-api-access-zc9pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.576713 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.612465 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b45ea9a1-6c1f-4719-8432-2add7fdef96d" (UID: "b45ea9a1-6c1f-4719-8432-2add7fdef96d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.613590 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-config-data" (OuterVolumeSpecName: "config-data") pod "b45ea9a1-6c1f-4719-8432-2add7fdef96d" (UID: "b45ea9a1-6c1f-4719-8432-2add7fdef96d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.622331 4979 generic.go:334] "Generic (PLEG): container finished" podID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerID="75ed2e4b32fb1961aa4410d1ed60d78ef4fdaa5313919f801c512171fa44ddd8" exitCode=0 Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.622414 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b","Type":"ContainerDied","Data":"75ed2e4b32fb1961aa4410d1ed60d78ef4fdaa5313919f801c512171fa44ddd8"} Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.624092 4979 generic.go:334] "Generic (PLEG): container finished" podID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerID="5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c" exitCode=0 Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.624172 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.624200 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b45ea9a1-6c1f-4719-8432-2add7fdef96d","Type":"ContainerDied","Data":"5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c"} Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.624283 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b45ea9a1-6c1f-4719-8432-2add7fdef96d","Type":"ContainerDied","Data":"cb1a59203ab85e4b8be1f22657c8e3ce137007d98b95b2249b478cc2e64ec70a"} Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.624320 4979 scope.go:117] "RemoveContainer" containerID="5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.625613 4979 generic.go:334] "Generic (PLEG): container finished" podID="4df90142-0487-4f26-8fb8-4ea21cda53d5" containerID="fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed" exitCode=0 Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.625654 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4df90142-0487-4f26-8fb8-4ea21cda53d5","Type":"ContainerDied","Data":"fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed"} Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.625682 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4df90142-0487-4f26-8fb8-4ea21cda53d5","Type":"ContainerDied","Data":"7b84177e95b1b0a6d39f6b6ff9de05d3f93d855b4f18c28d8844d6758839a5e7"} Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.625716 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.632104 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b45ea9a1-6c1f-4719-8432-2add7fdef96d" (UID: "b45ea9a1-6c1f-4719-8432-2add7fdef96d"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.641509 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mcf2\" (UniqueName: \"kubernetes.io/projected/4df90142-0487-4f26-8fb8-4ea21cda53d5-kube-api-access-6mcf2\") pod \"4df90142-0487-4f26-8fb8-4ea21cda53d5\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.641794 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle\") pod \"4df90142-0487-4f26-8fb8-4ea21cda53d5\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.641848 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-config-data\") pod \"4df90142-0487-4f26-8fb8-4ea21cda53d5\" (UID: \"4df90142-0487-4f26-8fb8-4ea21cda53d5\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.642648 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/b45ea9a1-6c1f-4719-8432-2add7fdef96d-kube-api-access-zc9pb\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.642673 4979 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.642691 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.642706 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b45ea9a1-6c1f-4719-8432-2add7fdef96d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.661494 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df90142-0487-4f26-8fb8-4ea21cda53d5-kube-api-access-6mcf2" (OuterVolumeSpecName: "kube-api-access-6mcf2") pod "4df90142-0487-4f26-8fb8-4ea21cda53d5" (UID: "4df90142-0487-4f26-8fb8-4ea21cda53d5"). InnerVolumeSpecName "kube-api-access-6mcf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.680305 4979 scope.go:117] "RemoveContainer" containerID="bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.688993 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-config-data" (OuterVolumeSpecName: "config-data") pod "4df90142-0487-4f26-8fb8-4ea21cda53d5" (UID: "4df90142-0487-4f26-8fb8-4ea21cda53d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.691224 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4df90142-0487-4f26-8fb8-4ea21cda53d5" (UID: "4df90142-0487-4f26-8fb8-4ea21cda53d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.716268 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.728407 4979 scope.go:117] "RemoveContainer" containerID="5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c" Jan 30 22:05:52 crc kubenswrapper[4979]: E0130 22:05:52.729246 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c\": container with ID starting with 5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c not found: ID does not exist" containerID="5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.729487 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c"} err="failed to get container status \"5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c\": rpc error: code = NotFound desc = could not find container \"5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c\": container with ID starting with 5f30460a78fb620878a260a133fe162f25b139b5312cc6c77862c78abae67c1c not found: ID does not exist" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.729620 4979 scope.go:117] "RemoveContainer" containerID="bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e" Jan 30 22:05:52 crc kubenswrapper[4979]: E0130 22:05:52.730532 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e\": container with ID starting with bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e not found: ID does not exist" containerID="bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.730690 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e"} err="failed to get container status \"bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e\": rpc error: code = NotFound desc = could not find container \"bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e\": container with ID starting with bc4a13decbdb5cb304b671ef73bd0eaba666577f7ad0fbfda8d7bdc17ab8a83e not found: ID does not exist" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.730799 4979 scope.go:117] "RemoveContainer" containerID="fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.751551 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.751586 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df90142-0487-4f26-8fb8-4ea21cda53d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.751598 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mcf2\" (UniqueName: \"kubernetes.io/projected/4df90142-0487-4f26-8fb8-4ea21cda53d5-kube-api-access-6mcf2\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.765820 4979 scope.go:117] "RemoveContainer" containerID="fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed" Jan 30 22:05:52 crc kubenswrapper[4979]: E0130 22:05:52.767075 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed\": container with ID starting with fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed not found: ID does not exist" containerID="fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.767141 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed"} err="failed to get container status \"fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed\": rpc error: code = NotFound desc = could not find container \"fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed\": container with ID starting with fce49f1ebb69f9b3e67dbbcadff1352212409ff215576dc39098885916a060ed not found: ID does not exist" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.852490 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-combined-ca-bundle\") pod \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.852631 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-public-tls-certs\") pod \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.852686 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vt8v\" (UniqueName: \"kubernetes.io/projected/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-kube-api-access-7vt8v\") pod \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.852755 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-internal-tls-certs\") pod \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.852951 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-logs\") pod \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.853095 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-config-data\") pod \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\" (UID: \"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b\") " Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.853560 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-logs" (OuterVolumeSpecName: "logs") pod "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" (UID: "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.853677 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.857061 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-kube-api-access-7vt8v" (OuterVolumeSpecName: "kube-api-access-7vt8v") pod "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" (UID: "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b"). InnerVolumeSpecName "kube-api-access-7vt8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.885868 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-config-data" (OuterVolumeSpecName: "config-data") pod "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" (UID: "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.899576 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" (UID: "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.917751 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" (UID: "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.917893 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" (UID: "9e1ddd52-3fb2-4fc2-aedd-cad0b550179b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.955548 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.955578 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.955592 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.955602 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vt8v\" (UniqueName: \"kubernetes.io/projected/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-kube-api-access-7vt8v\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.955611 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:05:52 crc kubenswrapper[4979]: I0130 22:05:52.976138 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.006162 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021024 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021746 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-api" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021775 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-api" Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021797 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="registry-server" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021809 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="registry-server" Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021825 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-metadata" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021833 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-metadata" Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021846 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-log" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021854 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-log" Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021880 4979 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="extract-content" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021888 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="extract-content" Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021900 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-log" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021910 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-log" Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021947 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707c6502-cbf2-4d94-b032-6d6eeebb581e" containerName="nova-manage" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021955 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="707c6502-cbf2-4d94-b032-6d6eeebb581e" containerName="nova-manage" Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021968 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df90142-0487-4f26-8fb8-4ea21cda53d5" containerName="nova-scheduler-scheduler" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.021976 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df90142-0487-4f26-8fb8-4ea21cda53d5" containerName="nova-scheduler-scheduler" Jan 30 22:05:53 crc kubenswrapper[4979]: E0130 22:05:53.021992 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="extract-utilities" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022017 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="extract-utilities" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022286 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-log" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022307 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" containerName="nova-api-api" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022318 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-log" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022335 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" containerName="nova-metadata-metadata" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022355 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="707c6502-cbf2-4d94-b032-6d6eeebb581e" containerName="nova-manage" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022368 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f682a99-2265-4234-a19c-01f62262e96b" containerName="registry-server" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.022403 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df90142-0487-4f26-8fb8-4ea21cda53d5" containerName="nova-scheduler-scheduler" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.023467 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.033582 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.034824 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.060409 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.084097 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4df90142-0487-4f26-8fb8-4ea21cda53d5" path="/var/lib/kubelet/pods/4df90142-0487-4f26-8fb8-4ea21cda53d5/volumes" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.084826 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45ea9a1-6c1f-4719-8432-2add7fdef96d" path="/var/lib/kubelet/pods/b45ea9a1-6c1f-4719-8432-2add7fdef96d/volumes" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.085890 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.085929 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.088253 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.093191 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.093443 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.095153 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.170920 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdrzj\" (UniqueName: \"kubernetes.io/projected/f69eed38-4641-4703-8a87-93aedebfbff1-kube-api-access-sdrzj\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.171611 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44df4390-d39d-42b7-904c-99d3e9680768-logs\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.171892 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8x65\" (UniqueName: \"kubernetes.io/projected/44df4390-d39d-42b7-904c-99d3e9680768-kube-api-access-v8x65\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.174222 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-config-data\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:53 crc 
kubenswrapper[4979]: I0130 22:05:53.174401 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-config-data\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.174476 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.174728 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.174804 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.277458 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-config-data\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.277531 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.278108 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.279395 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.279498 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdrzj\" (UniqueName: \"kubernetes.io/projected/f69eed38-4641-4703-8a87-93aedebfbff1-kube-api-access-sdrzj\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.279658 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/44df4390-d39d-42b7-904c-99d3e9680768-logs\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.279737 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8x65\" (UniqueName: \"kubernetes.io/projected/44df4390-d39d-42b7-904c-99d3e9680768-kube-api-access-v8x65\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.279803 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-config-data\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.280605 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44df4390-d39d-42b7-904c-99d3e9680768-logs\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.285639 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.285836 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.286892 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-config-data\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.286903 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-config-data\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.287957 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.297325 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdrzj\" (UniqueName: \"kubernetes.io/projected/f69eed38-4641-4703-8a87-93aedebfbff1-kube-api-access-sdrzj\") pod \"nova-scheduler-0\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " pod="openstack/nova-scheduler-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.300416 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v8x65\" (UniqueName: \"kubernetes.io/projected/44df4390-d39d-42b7-904c-99d3e9680768-kube-api-access-v8x65\") pod \"nova-metadata-0\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.464567 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.486994 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.643136 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e1ddd52-3fb2-4fc2-aedd-cad0b550179b","Type":"ContainerDied","Data":"94fccc846accac2626b4330c74f1995d347342c1b98a558385ef9d93cbd0d6e8"} Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.643202 4979 scope.go:117] "RemoveContainer" containerID="75ed2e4b32fb1961aa4410d1ed60d78ef4fdaa5313919f801c512171fa44ddd8" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.643222 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.674924 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.703206 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.712081 4979 scope.go:117] "RemoveContainer" containerID="92f16fea6d07515ee136c5ba64aa266adb56de1f0255864e495e362a46f2f310" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.715988 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.717854 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.720718 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.720956 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.721051 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.735065 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.796964 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.797060 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae89cf4-f9f4-456b-947f-be87514b79ff-logs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.797423 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-config-data\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.797626 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.797809 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.798142 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txcpg\" (UniqueName: \"kubernetes.io/projected/3ae89cf4-f9f4-456b-947f-be87514b79ff-kube-api-access-txcpg\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.900160 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-config-data\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.900241 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.900288 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.900356 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txcpg\" (UniqueName: \"kubernetes.io/projected/3ae89cf4-f9f4-456b-947f-be87514b79ff-kube-api-access-txcpg\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.901284 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.901331 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae89cf4-f9f4-456b-947f-be87514b79ff-logs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.901837 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae89cf4-f9f4-456b-947f-be87514b79ff-logs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.905623 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-config-data\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.906754 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.906992 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.907321 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.920307 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txcpg\" (UniqueName: \"kubernetes.io/projected/3ae89cf4-f9f4-456b-947f-be87514b79ff-kube-api-access-txcpg\") pod \"nova-api-0\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " pod="openstack/nova-api-0" Jan 
30 22:05:53 crc kubenswrapper[4979]: I0130 22:05:53.998727 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:05:54 crc kubenswrapper[4979]: W0130 22:05:54.001481 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf69eed38_4641_4703_8a87_93aedebfbff1.slice/crio-44999172f23ebc85109e86b0754fcca5c95fcb604e5236af4579e9ca3325bed8 WatchSource:0}: Error finding container 44999172f23ebc85109e86b0754fcca5c95fcb604e5236af4579e9ca3325bed8: Status 404 returned error can't find the container with id 44999172f23ebc85109e86b0754fcca5c95fcb604e5236af4579e9ca3325bed8 Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.045644 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:05:54 crc kubenswrapper[4979]: W0130 22:05:54.102433 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44df4390_d39d_42b7_904c_99d3e9680768.slice/crio-310f6153774de835ecceb3e7b4bfe47eaf94f357a8b3af4b2a3390f2be2a89ff WatchSource:0}: Error finding container 310f6153774de835ecceb3e7b4bfe47eaf94f357a8b3af4b2a3390f2be2a89ff: Status 404 returned error can't find the container with id 310f6153774de835ecceb3e7b4bfe47eaf94f357a8b3af4b2a3390f2be2a89ff Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.104430 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:05:54 crc kubenswrapper[4979]: W0130 22:05:54.558913 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ae89cf4_f9f4_456b_947f_be87514b79ff.slice/crio-d676db9e0437471efdaf50743e5441a714dacf3c96e2d551ea726b731e77a900 WatchSource:0}: Error finding container d676db9e0437471efdaf50743e5441a714dacf3c96e2d551ea726b731e77a900: Status 404 returned error can't find the container with id d676db9e0437471efdaf50743e5441a714dacf3c96e2d551ea726b731e77a900 Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.562681 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.671478 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44df4390-d39d-42b7-904c-99d3e9680768","Type":"ContainerStarted","Data":"99f9e7602668b98789ff476044ada1b106a498ed44ed34ee5c2700adce022186"} Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.671545 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44df4390-d39d-42b7-904c-99d3e9680768","Type":"ContainerStarted","Data":"2f2fbcbfa3fb8957bd22dbbdae0f118ed4065b8e1b28fd2310cab48fd875577d"} Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.671557 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44df4390-d39d-42b7-904c-99d3e9680768","Type":"ContainerStarted","Data":"310f6153774de835ecceb3e7b4bfe47eaf94f357a8b3af4b2a3390f2be2a89ff"} Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.674454 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f69eed38-4641-4703-8a87-93aedebfbff1","Type":"ContainerStarted","Data":"e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395"} Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.674510 4979 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-scheduler-0" event={"ID":"f69eed38-4641-4703-8a87-93aedebfbff1","Type":"ContainerStarted","Data":"44999172f23ebc85109e86b0754fcca5c95fcb604e5236af4579e9ca3325bed8"} Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.676005 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ae89cf4-f9f4-456b-947f-be87514b79ff","Type":"ContainerStarted","Data":"d676db9e0437471efdaf50743e5441a714dacf3c96e2d551ea726b731e77a900"} Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.707457 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.707429953 podStartE2EDuration="2.707429953s" podCreationTimestamp="2026-01-30 22:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:54.691715322 +0000 UTC m=+1550.652962375" watchObservedRunningTime="2026-01-30 22:05:54.707429953 +0000 UTC m=+1550.668676996" Jan 30 22:05:54 crc kubenswrapper[4979]: I0130 22:05:54.725803 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.725778186 podStartE2EDuration="2.725778186s" podCreationTimestamp="2026-01-30 22:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:54.71589612 +0000 UTC m=+1550.677143153" watchObservedRunningTime="2026-01-30 22:05:54.725778186 +0000 UTC m=+1550.687025219" Jan 30 22:05:55 crc kubenswrapper[4979]: I0130 22:05:55.088955 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e1ddd52-3fb2-4fc2-aedd-cad0b550179b" path="/var/lib/kubelet/pods/9e1ddd52-3fb2-4fc2-aedd-cad0b550179b/volumes" Jan 30 22:05:55 crc kubenswrapper[4979]: I0130 22:05:55.724294 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ae89cf4-f9f4-456b-947f-be87514b79ff","Type":"ContainerStarted","Data":"5ac3c882827d52df05b6724629ccc459728f629242f9b9649899fbfb3897e504"} Jan 30 22:05:55 crc kubenswrapper[4979]: I0130 22:05:55.724397 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ae89cf4-f9f4-456b-947f-be87514b79ff","Type":"ContainerStarted","Data":"748d1a4bd7c293d8968765b3b267f988706b6c7ba86f06948fccdfb30542ea96"} Jan 30 22:05:55 crc kubenswrapper[4979]: I0130 22:05:55.757319 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.757289552 podStartE2EDuration="2.757289552s" podCreationTimestamp="2026-01-30 22:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:05:55.752430502 +0000 UTC m=+1551.713677555" watchObservedRunningTime="2026-01-30 22:05:55.757289552 +0000 UTC m=+1551.718536585" Jan 30 22:05:58 crc kubenswrapper[4979]: I0130 22:05:58.464912 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 22:05:58 crc kubenswrapper[4979]: I0130 22:05:58.487994 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 22:05:58 crc kubenswrapper[4979]: I0130 22:05:58.488122 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 22:06:01 crc kubenswrapper[4979]: I0130 
22:06:01.960411 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.039458 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.039613 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.039710 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.040803 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.040872 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" gracePeriod=600 Jan 30 22:06:02 crc kubenswrapper[4979]: E0130 22:06:02.168489 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.815952 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" exitCode=0 Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.816014 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c"} Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.816443 4979 scope.go:117] "RemoveContainer" containerID="9dd828028bd8f4b59424b93888d32e1ab8101a0db37322829e13e6a47a54aa2c" Jan 30 22:06:02 crc kubenswrapper[4979]: I0130 22:06:02.817416 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:06:02 crc kubenswrapper[4979]: E0130 22:06:02.817815 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:06:03 crc kubenswrapper[4979]: I0130 22:06:03.465155 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 22:06:03 crc kubenswrapper[4979]: I0130 22:06:03.488259 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 22:06:03 crc kubenswrapper[4979]: I0130 22:06:03.488326 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 22:06:03 crc kubenswrapper[4979]: I0130 22:06:03.503257 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 22:06:03 crc kubenswrapper[4979]: I0130 22:06:03.861820 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 22:06:04 crc kubenswrapper[4979]: I0130 22:06:04.046389 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 22:06:04 crc kubenswrapper[4979]: I0130 22:06:04.046459 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 22:06:04 crc kubenswrapper[4979]: I0130 22:06:04.511334 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:06:04 crc kubenswrapper[4979]: I0130 22:06:04.511353 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:06:05 crc kubenswrapper[4979]: I0130 22:06:05.065276 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:06:05 crc kubenswrapper[4979]: I0130 22:06:05.065408 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 22:06:13 crc kubenswrapper[4979]: I0130 22:06:13.070291 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:06:13 crc kubenswrapper[4979]: E0130 22:06:13.071602 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:06:13 crc kubenswrapper[4979]: I0130 22:06:13.494360 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 22:06:13 crc kubenswrapper[4979]: I0130 22:06:13.494462 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 22:06:13 crc kubenswrapper[4979]: I0130 22:06:13.500944 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 22:06:13 crc kubenswrapper[4979]: I0130 22:06:13.502338 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 22:06:14 crc kubenswrapper[4979]: I0130 22:06:14.058229 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 22:06:14 crc kubenswrapper[4979]: I0130 22:06:14.059016 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 22:06:14 crc kubenswrapper[4979]: I0130 22:06:14.059938 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 22:06:14 crc kubenswrapper[4979]: I0130 22:06:14.069448 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 22:06:14 crc kubenswrapper[4979]: I0130 22:06:14.941445 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 22:06:14 crc kubenswrapper[4979]: I0130 22:06:14.953130 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 22:06:26 crc kubenswrapper[4979]: I0130 22:06:26.070717 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:06:26 crc kubenswrapper[4979]: E0130 22:06:26.072203 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.395796 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nxlz6"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.398501 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.403143 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.467417 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s998g\" (UniqueName: \"kubernetes.io/projected/2ae1b557-b27a-4331-8c91-bb1934e91fce-kube-api-access-s998g\") pod \"root-account-create-update-nxlz6\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.467501 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts\") pod \"root-account-create-update-nxlz6\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.518588 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kkrz5"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.545454 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.545824 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" containerName="openstackclient" containerID="cri-o://6ccf84aaaded71906e123ab07138f1d46a5f8b45f0e088139ccd8642a91c4d8c" gracePeriod=2 Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.557950 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.570105 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s998g\" (UniqueName: \"kubernetes.io/projected/2ae1b557-b27a-4331-8c91-bb1934e91fce-kube-api-access-s998g\") pod \"root-account-create-update-nxlz6\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.570194 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts\") pod \"root-account-create-update-nxlz6\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.571902 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts\") pod \"root-account-create-update-nxlz6\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.578109 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kkrz5"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.590324 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nxlz6"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.660228 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s998g\" (UniqueName: \"kubernetes.io/projected/2ae1b557-b27a-4331-8c91-bb1934e91fce-kube-api-access-s998g\") pod \"root-account-create-update-nxlz6\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.741182 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.777090 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-18a2-account-create-update-tgfqm"] Jan 30 22:06:31 crc kubenswrapper[4979]: E0130 22:06:31.777695 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" containerName="openstackclient" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.777713 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" containerName="openstackclient" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.777973 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" containerName="openstackclient" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.793236 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.806743 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.843249 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.884172 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d511-account-create-update-gfm26"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.885776 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.889942 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.923134 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-18a2-account-create-update-tgfqm"] Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.940754 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgphw\" (UniqueName: \"kubernetes.io/projected/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-kube-api-access-jgphw\") pod \"cinder-18a2-account-create-update-tgfqm\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.946206 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-operator-scripts\") pod \"cinder-18a2-account-create-update-tgfqm\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:31 crc kubenswrapper[4979]: E0130 22:06:31.947987 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:31 crc kubenswrapper[4979]: E0130 22:06:31.948166 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data podName:981f1fee-4d2a-4d80-bf38-80557b6c5033 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:32.448140926 +0000 UTC m=+1588.409387959 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data") pod "rabbitmq-cell1-server-0" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033") : configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:31 crc kubenswrapper[4979]: I0130 22:06:31.981102 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d511-account-create-update-gfm26"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.049814 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhspj\" (UniqueName: \"kubernetes.io/projected/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-kube-api-access-hhspj\") pod \"neutron-d511-account-create-update-gfm26\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.049872 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-operator-scripts\") pod \"neutron-d511-account-create-update-gfm26\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.049922 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgphw\" (UniqueName: \"kubernetes.io/projected/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-kube-api-access-jgphw\") pod \"cinder-18a2-account-create-update-tgfqm\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.049963 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-operator-scripts\") pod \"cinder-18a2-account-create-update-tgfqm\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.051076 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-operator-scripts\") pod \"cinder-18a2-account-create-update-tgfqm\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.099218 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b6e4-account-create-update-6c4qp"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.101134 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.108167 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.133373 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-18a2-account-create-update-xznvc"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.139215 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-18a2-account-create-update-xznvc"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.152329 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-0121-account-create-update-cjfbd"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.154192 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.157433 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhspj\" (UniqueName: \"kubernetes.io/projected/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-kube-api-access-hhspj\") pod \"neutron-d511-account-create-update-gfm26\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.157485 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-operator-scripts\") pod \"neutron-d511-account-create-update-gfm26\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.157568 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe035ddd-73a5-43fd-8b1d-343447e1f850-operator-scripts\") pod \"glance-b6e4-account-create-update-6c4qp\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.157748 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcnjv\" (UniqueName: \"kubernetes.io/projected/fe035ddd-73a5-43fd-8b1d-343447e1f850-kube-api-access-gcnjv\") pod \"glance-b6e4-account-create-update-6c4qp\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.157912 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.157987 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:32.657963162 +0000 UTC m=+1588.619210195 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-scripts" not found Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.159096 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-operator-scripts\") pod \"neutron-d511-account-create-update-gfm26\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.159932 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.159972 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:32.659962235 +0000 UTC m=+1588.621209268 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-config" not found Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.163943 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgphw\" (UniqueName: \"kubernetes.io/projected/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-kube-api-access-jgphw\") pod \"cinder-18a2-account-create-update-tgfqm\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.164009 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b6e4-account-create-update-6c4qp"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.172094 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.176086 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0121-account-create-update-cjfbd"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.186138 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d511-account-create-update-jtbft"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.194764 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d511-account-create-update-jtbft"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.224430 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.263971 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcnjv\" (UniqueName: \"kubernetes.io/projected/fe035ddd-73a5-43fd-8b1d-343447e1f850-kube-api-access-gcnjv\") pod \"glance-b6e4-account-create-update-6c4qp\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.286884 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d7f5965-9d27-4649-bb8f-9e99a57c0362-operator-scripts\") pod \"placement-0121-account-create-update-cjfbd\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.287157 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfxkv\" (UniqueName: \"kubernetes.io/projected/5d7f5965-9d27-4649-bb8f-9e99a57c0362-kube-api-access-hfxkv\") pod \"placement-0121-account-create-update-cjfbd\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.287243 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe035ddd-73a5-43fd-8b1d-343447e1f850-operator-scripts\") pod \"glance-b6e4-account-create-update-6c4qp\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.288825 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe035ddd-73a5-43fd-8b1d-343447e1f850-operator-scripts\") pod \"glance-b6e4-account-create-update-6c4qp\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.362233 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhspj\" (UniqueName: \"kubernetes.io/projected/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-kube-api-access-hhspj\") pod \"neutron-d511-account-create-update-gfm26\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.405244 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcnjv\" (UniqueName: 
\"kubernetes.io/projected/fe035ddd-73a5-43fd-8b1d-343447e1f850-kube-api-access-gcnjv\") pod \"glance-b6e4-account-create-update-6c4qp\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.426966 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d7f5965-9d27-4649-bb8f-9e99a57c0362-operator-scripts\") pod \"placement-0121-account-create-update-cjfbd\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.427143 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfxkv\" (UniqueName: \"kubernetes.io/projected/5d7f5965-9d27-4649-bb8f-9e99a57c0362-kube-api-access-hfxkv\") pod \"placement-0121-account-create-update-cjfbd\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.429326 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d7f5965-9d27-4649-bb8f-9e99a57c0362-operator-scripts\") pod \"placement-0121-account-create-update-cjfbd\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.508319 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.526260 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b6e4-account-create-update-kc2rf"] Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.580189 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.580284 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data podName:981f1fee-4d2a-4d80-bf38-80557b6c5033 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:33.580256166 +0000 UTC m=+1589.541503189 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data") pod "rabbitmq-cell1-server-0" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033") : configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.672654 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfxkv\" (UniqueName: \"kubernetes.io/projected/5d7f5965-9d27-4649-bb8f-9e99a57c0362-kube-api-access-hfxkv\") pod \"placement-0121-account-create-update-cjfbd\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.717863 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.718469 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:33.718446293 +0000 UTC m=+1589.679693326 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-config" not found Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.718913 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 30 22:06:32 crc kubenswrapper[4979]: E0130 22:06:32.718940 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:33.718932797 +0000 UTC m=+1589.680179830 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-scripts" not found Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.720019 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.812416 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.863581 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.924968 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b6e4-account-create-update-kc2rf"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.991325 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1082-account-create-update-vm4l4"] Jan 30 22:06:32 crc kubenswrapper[4979]: I0130 22:06:32.993067 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.012127 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.037986 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.038070 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data podName:e28a1e34-b97c-4090-adf8-fa3e2b766365 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:33.538048124 +0000 UTC m=+1589.499295157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data") pod "rabbitmq-server-0" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365") : configmap "rabbitmq-config-data" not found Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.039711 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0121-account-create-update-k277d"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.141931 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlm9\" (UniqueName: \"kubernetes.io/projected/b0f67cef-fc43-42c0-967e-d51d1730b419-kube-api-access-7rlm9\") pod \"nova-api-1082-account-create-update-vm4l4\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.142565 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f67cef-fc43-42c0-967e-d51d1730b419-operator-scripts\") pod \"nova-api-1082-account-create-update-vm4l4\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.222168 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206c6cff-9f21-42be-b4d9-ebab3cb4ead8" path="/var/lib/kubelet/pods/206c6cff-9f21-42be-b4d9-ebab3cb4ead8/volumes" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.251280 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79a4cbbe-93e4-414e-9ca3-2cd182d6ed96" path="/var/lib/kubelet/pods/79a4cbbe-93e4-414e-9ca3-2cd182d6ed96/volumes" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.252253 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f67cef-fc43-42c0-967e-d51d1730b419-operator-scripts\") pod \"nova-api-1082-account-create-update-vm4l4\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.252353 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlm9\" (UniqueName: \"kubernetes.io/projected/b0f67cef-fc43-42c0-967e-d51d1730b419-kube-api-access-7rlm9\") pod \"nova-api-1082-account-create-update-vm4l4\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.255925 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f67cef-fc43-42c0-967e-d51d1730b419-operator-scripts\") pod \"nova-api-1082-account-create-update-vm4l4\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.256458 4979 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.256545 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts podName:aec2e945-509e-4cbb-9988-9f6cc840cd62 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:33.756521013 +0000 UTC m=+1589.717768046 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts") pod "glance-default-internal-api-0" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62") : secret "glance-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.280389 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc3a0116-2f4a-4dde-bf99-56759f4349bc" path="/var/lib/kubelet/pods/bc3a0116-2f4a-4dde-bf99-56759f4349bc/volumes" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.284125 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0187b79-63c8-4f13-af19-892e8c9b36f9" path="/var/lib/kubelet/pods/e0187b79-63c8-4f13-af19-892e8c9b36f9/volumes" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.286293 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-0121-account-create-update-k277d"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.286331 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-504c-account-create-update-wjh5g"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.311564 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.311624 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-016f-account-create-update-nh2b8"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.311834 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.314371 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="openstack-network-exporter" containerID="cri-o://9e984fe191fbb0e089fea2d7c4a853d2ee59f390e44ae404701bd08fbd0e1844" gracePeriod=300 Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.322989 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.363722 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dfzv\" (UniqueName: \"kubernetes.io/projected/8573fb5d-0536-4182-95b7-f8d0a16ce994-kube-api-access-9dfzv\") pod \"nova-cell0-504c-account-create-update-wjh5g\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.364284 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8573fb5d-0536-4182-95b7-f8d0a16ce994-operator-scripts\") pod \"nova-cell0-504c-account-create-update-wjh5g\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.367651 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1082-account-create-update-vm4l4"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.367696 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-wjh5g"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.367710 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-nh2b8"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.367831 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.375902 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.377355 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="openstack-network-exporter" containerID="cri-o://ff005d24d962eb84bd10a56b66ec88ce9be0ba0641162443a679b4594c534402" gracePeriod=300 Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.396213 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.423107 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-cj64f"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.451176 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-cj64f"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.453132 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlm9\" (UniqueName: \"kubernetes.io/projected/b0f67cef-fc43-42c0-967e-d51d1730b419-kube-api-access-7rlm9\") pod \"nova-api-1082-account-create-update-vm4l4\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.468880 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.469010 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dfzv\" (UniqueName: \"kubernetes.io/projected/8573fb5d-0536-4182-95b7-f8d0a16ce994-kube-api-access-9dfzv\") pod \"nova-cell0-504c-account-create-update-wjh5g\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.469054 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzdcz\" (UniqueName: \"kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.469197 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8573fb5d-0536-4182-95b7-f8d0a16ce994-operator-scripts\") pod \"nova-cell0-504c-account-create-update-wjh5g\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.470076 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8573fb5d-0536-4182-95b7-f8d0a16ce994-operator-scripts\") pod 
\"nova-cell0-504c-account-create-update-wjh5g\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.539417 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.587442 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dfzv\" (UniqueName: \"kubernetes.io/projected/8573fb5d-0536-4182-95b7-f8d0a16ce994-kube-api-access-9dfzv\") pod \"nova-cell0-504c-account-create-update-wjh5g\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.609631 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.610119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzdcz\" (UniqueName: \"kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.611886 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nxlz6"] Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.613084 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.613175 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data podName:e28a1e34-b97c-4090-adf8-fa3e2b766365 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:34.613150889 +0000 UTC m=+1590.574397922 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data") pod "rabbitmq-server-0" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365") : configmap "rabbitmq-config-data" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.613456 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.613480 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data podName:981f1fee-4d2a-4d80-bf38-80557b6c5033 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.613472288 +0000 UTC m=+1591.574719321 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data") pod "rabbitmq-cell1-server-0" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033") : configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.624923 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.646505 4979 projected.go:194] Error preparing data for projected volume kube-api-access-kzdcz for pod openstack/nova-cell1-016f-account-create-update-nh2b8: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.646601 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:34.146580138 +0000 UTC m=+1590.107827171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kzdcz" (UniqueName: "kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.685325 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1082-account-create-update-drkzw"] Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.716362 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.716444 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:34.216427128 +0000 UTC m=+1590.177674151 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.762322 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.763086 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="openstack-network-exporter" containerID="cri-o://80763810cb3d21dbcce7752b095be501d4710e63b0bd5bbd6940f8072de72cd1" gracePeriod=30 Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.763213 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="ovn-northd" containerID="cri-o://e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6" gracePeriod=30 Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.791159 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1082-account-create-update-drkzw"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.813438 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-m57kd"] Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.820508 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.820604 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.82058296 +0000 UTC m=+1591.781829993 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.821087 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.821116 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.821106155 +0000 UTC m=+1591.782353188 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-config" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.824963 4979 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: E0130 22:06:33.825208 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts podName:aec2e945-509e-4cbb-9988-9f6cc840cd62 nodeName:}" failed. 
No retries permitted until 2026-01-30 22:06:34.825180455 +0000 UTC m=+1590.786427488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts") pod "glance-default-internal-api-0" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62") : secret "glance-scripts" not found Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.844267 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.860387 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-brzlt"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.936098 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-m57kd"] Jan 30 22:06:33 crc kubenswrapper[4979]: I0130 22:06:33.964569 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="ovsdbserver-sb" containerID="cri-o://364e682e6c255c1ae57ab43188da7c33d808a98976158abfaa1e6b315ea3de7e" gracePeriod=300 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.073299 4979 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/neutron-ccc5789d5-9fbcz" secret="" err="secret \"neutron-neutron-dockercfg-cgj89\" not found" Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.150654 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-brzlt"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.157945 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzdcz\" (UniqueName: \"kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.168143 4979 projected.go:194] Error preparing data for projected volume kube-api-access-kzdcz for pod openstack/nova-cell1-016f-account-create-update-nh2b8: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.168573 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.168548764 +0000 UTC m=+1591.129795797 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kzdcz" (UniqueName: "kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.194962 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="ovsdbserver-nb" containerID="cri-o://e0b4d6ab18b18def097e57b8f8ea312d94d6ebc53da831f12d75273becc95e4d" gracePeriod=300 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.260995 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-cf4cw"] Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.262885 4979 secret.go:188] Couldn't get secret openstack/neutron-config: secret "neutron-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.262962 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:34.762943464 +0000 UTC m=+1590.724190497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.265367 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.265453 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.265431291 +0000 UTC m=+1591.226678324 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.265513 4979 secret.go:188] Couldn't get secret openstack/neutron-httpd-config: secret "neutron-httpd-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.265541 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:34.765530863 +0000 UTC m=+1590.726777896 (durationBeforeRetry 500ms). 
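
[Annotation] The kube-api-access-* volumes failing here are projected service-account token volumes: for each mount the kubelet requests a fresh token from the API server via the TokenRequest subresource, and that call is what returns `serviceaccounts "galera-openstack-cell1" not found` while the ServiceAccount is absent. A sketch of the equivalent client-go call; the namespace and ServiceAccount name are copied from the entries above, and running it outside a pod would need a kubeconfig instead of InClusterConfig:

    package main

    import (
        "context"
        "fmt"

        authenticationv1 "k8s.io/api/authentication/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumption: run in-cluster
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // The call that fails above while the ServiceAccount is missing; an
        // empty TokenRequest asks for a default-lifetime bound token.
        tr, err := cs.CoreV1().ServiceAccounts("openstack").CreateToken(
            context.TODO(), "galera-openstack-cell1",
            &authenticationv1.TokenRequest{}, metav1.CreateOptions{})
        if err != nil {
            fmt.Println("token request failed:", err)
            return
        }
        fmt.Println("token expires:", tr.Status.ExpirationTimestamp)
    }
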
Error: MountVolume.SetUp failed for volume "httpd-config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-httpd-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.275706 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nxlz6" event={"ID":"2ae1b557-b27a-4331-8c91-bb1934e91fce","Type":"ContainerStarted","Data":"b83c4ed8bbda19ed5aa54ca0fc84bb29d05f7a78681b54738255e43bd19127ba"} Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.281438 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282169 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-server" containerID="cri-o://9ebde5265edc1759790d3676946d4106e58a2899f6ca92dff07d39b2c655de8d" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282673 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="swift-recon-cron" containerID="cri-o://453f3cdac4ea155af06a1a316c55ca43062a6082a47aacfa7561eb05a7b482b3" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282734 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="rsync" containerID="cri-o://91cb53bd2b951f74cd0d66aa9f24d08e3c7022176624a9c9ffd768ceb393e191" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282773 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-expirer" containerID="cri-o://7c505ec2a0f97d2fc0eb2e5eb7103ee437e137790c70cbc45de54bec450be932" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282816 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-updater" containerID="cri-o://a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282884 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-server" containerID="cri-o://20e0cc7660bd336e138f9bda2b90b0037324c98e23852b050c094fc3ec2b9759" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282965 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-reaper" containerID="cri-o://1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.282915 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-replicator" containerID="cri-o://b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3" gracePeriod=30 Jan 30 22:06:34 crc 
kubenswrapper[4979]: I0130 22:06:34.283068 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-auditor" containerID="cri-o://42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.283124 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-replicator" containerID="cri-o://1fc0f7dc5cf54f3cba376eba063ba52318571cfa76b80fb36465eab8c48ff316" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.283386 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-server" containerID="cri-o://34b69c813947c1a15abad9192e8f1cfc7295fd0dfaea4369b35dee2f2f213420" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.283457 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-auditor" containerID="cri-o://c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.283513 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-replicator" containerID="cri-o://7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.283592 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-updater" containerID="cri-o://fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.283669 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-auditor" containerID="cri-o://77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.298552 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e16537b0-b66e-4bad-a481-9d2755cf6eb5/ovsdbserver-sb/0.log" Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.298634 4979 generic.go:334] "Generic (PLEG): container finished" podID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerID="ff005d24d962eb84bd10a56b66ec88ce9be0ba0641162443a679b4594c534402" exitCode=2 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.298662 4979 generic.go:334] "Generic (PLEG): container finished" podID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerID="364e682e6c255c1ae57ab43188da7c33d808a98976158abfaa1e6b315ea3de7e" exitCode=143 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.299897 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e16537b0-b66e-4bad-a481-9d2755cf6eb5","Type":"ContainerDied","Data":"ff005d24d962eb84bd10a56b66ec88ce9be0ba0641162443a679b4594c534402"} Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.299984 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e16537b0-b66e-4bad-a481-9d2755cf6eb5","Type":"ContainerDied","Data":"364e682e6c255c1ae57ab43188da7c33d808a98976158abfaa1e6b315ea3de7e"} Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.340644 4979 generic.go:334] "Generic (PLEG): container finished" podID="82508003-60c8-463b-92a9-bc9521fcfa03" containerID="6ccf84aaaded71906e123ab07138f1d46a5f8b45f0e088139ccd8642a91c4d8c" exitCode=137 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.341085 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-cf4cw"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.371134 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9zrqq"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.381814 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qjfmb"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.382722 4979 generic.go:334] "Generic (PLEG): container finished" podID="e8a49e0c-0043-4326-b478-981d19e6480b" containerID="9e984fe191fbb0e089fea2d7c4a853d2ee59f390e44ae404701bd08fbd0e1844" exitCode=2 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.382784 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8a49e0c-0043-4326-b478-981d19e6480b","Type":"ContainerDied","Data":"9e984fe191fbb0e089fea2d7c4a853d2ee59f390e44ae404701bd08fbd0e1844"} Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.402482 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9zrqq"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.425239 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qjfmb"] Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.454518 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:34 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:34 crc kubenswrapper[4979]: Jan 30 22:06:34 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:34 crc kubenswrapper[4979]: Jan 30 22:06:34 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:34 crc kubenswrapper[4979]: Jan 30 22:06:34 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:34 crc kubenswrapper[4979]: Jan 30 22:06:34 crc kubenswrapper[4979]: if [ -n "cinder" ]; then Jan 30 22:06:34 crc kubenswrapper[4979]: GRANT_DATABASE="cinder" Jan 30 22:06:34 crc kubenswrapper[4979]: else Jan 30 22:06:34 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:34 crc kubenswrapper[4979]: fi Jan 30 22:06:34 crc kubenswrapper[4979]: Jan 30 22:06:34 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:34 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:34 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:34 crc kubenswrapper[4979]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:34 crc kubenswrapper[4979]: # support updates Jan 30 22:06:34 crc kubenswrapper[4979]: Jan 30 22:06:34 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.456797 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-18a2-account-create-update-tgfqm" podUID="d4fc1eef-47e7-4fdd-9642-da7ce95056e8" Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.460250 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-s58pz"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.523633 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-s58pz"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.592952 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-tmjt2"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.608261 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxk8g"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.631994 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-qf69d"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.669484 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-qf69d"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.681270 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-lz8zj"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.681607 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-lz8zj" podUID="817d8847-f022-4837-834f-a0e4b124f7ea" containerName="openstack-network-exporter" containerID="cri-o://bb8bcac19d63070cb472f5498c791e719cc957cf60e16d8441a9b6a9f88dbeff" gracePeriod=30 Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.703864 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.703951 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data podName:e28a1e34-b97c-4090-adf8-fa3e2b766365 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:36.70392893 +0000 UTC m=+1592.665175953 (durationBeforeRetry 2s). 
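
[Annotation] The "Unhandled Error" dump above reproduces the mariadb-account-create-update container's inline shell script; the heredoc body after `$MYSQL_CMD <` was cut off by the logger and is left elided here. The paired pod_workers entry shows why the container never starts: a CreateContainerConfigError raised while the kubelet resolves the container's configuration, because secret "cinder-db-secret" does not exist. That is the behavior of an env var sourced from a secretKeyRef, sketched below; the actual pod spec is not in the log, so the key name is hypothetical:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // Sketch only: an env var sourced from a secretKeyRef is resolved by the
    // kubelet at container-create time. If the secret is absent, the pod is
    // held in CreateContainerConfigError (the `secret "cinder-db-secret" not
    // found` above) and retried, rather than the container starting with an
    // empty DatabasePassword.
    var dbPassword = corev1.EnvVar{
        Name: "DatabasePassword", // variable name taken from the script dump above
        ValueFrom: &corev1.EnvVarSource{
            SecretKeyRef: &corev1.SecretKeySelector{
                LocalObjectReference: corev1.LocalObjectReference{Name: "cinder-db-secret"},
                Key:                  "CinderDatabasePassword", // hypothetical key name
            },
        },
    }
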
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data") pod "rabbitmq-server-0" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365") : configmap "rabbitmq-config-data" not found Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.704897 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-kdhtr"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.705177 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" podUID="4bae0355-ad11-48d3-a13f-378354677f77" containerName="dnsmasq-dns" containerID="cri-o://68738a2810356039fe36b036d04e6e47dff0836ae08b737f9907c8607fb78312" gracePeriod=10 Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.728606 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-2zdqm"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.740248 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pqfg4"] Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.785928 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pqfg4"] Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.805941 4979 secret.go:188] Couldn't get secret openstack/neutron-config: secret "neutron-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.806022 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.806004557 +0000 UTC m=+1591.767251590 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.812195 4979 secret.go:188] Couldn't get secret openstack/neutron-httpd-config: secret "neutron-httpd-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.823339 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:35.823291012 +0000 UTC m=+1591.784538035 (durationBeforeRetry 1s). 
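
[Annotation] For reading the PLEG "container finished" entries in this section: exitCode values above 128 follow the shell convention 128+signal, so exitCode=143 is SIGTERM (delivered at the start of the grace period) and exitCode=137 is SIGKILL (when the grace period lapses), while small values such as exitCode=2 are the process's own exit status. A one-line decoding sketch:

    package main

    import "fmt"

    // decode interprets the exitCode values in the PLEG entries above.
    func decode(code int) string {
        if code > 128 {
            return fmt.Sprintf("killed by signal %d", code-128) // 143 -> 15 (SIGTERM), 137 -> 9 (SIGKILL)
        }
        return fmt.Sprintf("exited with status %d", code)
    }

    func main() {
        for _, c := range []int{2, 143, 137} {
            fmt.Println(c, "=>", decode(c))
        }
    }
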
Error: MountVolume.SetUp failed for volume "httpd-config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-httpd-config" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.897506 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-7c505ec2a0f97d2fc0eb2e5eb7103ee437e137790c70cbc45de54bec450be932.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode16537b0_b66e_4bad_a481_9d2755cf6eb5.slice/crio-conmon-364e682e6c255c1ae57ab43188da7c33d808a98976158abfaa1e6b315ea3de7e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82508003_60c8_463b_92a9_bc9521fcfa03.slice/crio-conmon-6ccf84aaaded71906e123ab07138f1d46a5f8b45f0e088139ccd8642a91c4d8c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8a49e0c_0043_4326_b478_981d19e6480b.slice/crio-conmon-e0b4d6ab18b18def097e57b8f8ea312d94d6ebc53da831f12d75273becc95e4d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7cc7cf6_3592_4e25_9578_27ae56d6909b.slice/crio-conmon-80763810cb3d21dbcce7752b095be501d4710e63b0bd5bbd6940f8072de72cd1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3258ad4a_d940_41c3_b875_afadfcc317d4.slice/crio-conmon-1fc0f7dc5cf54f3cba376eba063ba52318571cfa76b80fb36465eab8c48ff316.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:06:34 crc kubenswrapper[4979]: I0130 22:06:34.906126 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-2zdqm"] Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.908571 4979 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 30 22:06:34 crc kubenswrapper[4979]: E0130 22:06:34.908643 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts podName:aec2e945-509e-4cbb-9988-9f6cc840cd62 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:36.908624309 +0000 UTC m=+1592.869871342 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts") pod "glance-default-internal-api-0" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62") : secret "glance-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:34.995201 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.066117 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-95kjb"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.238085 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023efd8e-7f0d-4ac5-80b3-db30dbb25905" path="/var/lib/kubelet/pods/023efd8e-7f0d-4ac5-80b3-db30dbb25905/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.248441 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzdcz\" (UniqueName: \"kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.274448 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.274624 4979 projected.go:194] Error preparing data for projected volume kube-api-access-kzdcz for pod openstack/nova-cell1-016f-account-create-update-nh2b8: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.275239 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:37.275204172 +0000 UTC m=+1593.236451205 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kzdcz" (UniqueName: "kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.249005 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e523da-837e-4af0-835b-55b1950fc487" path="/var/lib/kubelet/pods/15e523da-837e-4af0-835b-55b1950fc487/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.284408 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="rabbitmq" containerID="cri-o://32737030f36aec701cd5a18ee26db33f1920b61eff0e7b5c5143eb68b64ad2a2" gracePeriod=604800 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.375879 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c6531f-d97f-4f39-95bd-4c2b8a75779f" path="/var/lib/kubelet/pods/29c6531f-d97f-4f39-95bd-4c2b8a75779f/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.376440 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config-secret\") pod \"82508003-60c8-463b-92a9-bc9521fcfa03\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.379458 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config\") pod \"82508003-60c8-463b-92a9-bc9521fcfa03\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.379600 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t85tp\" (UniqueName: \"kubernetes.io/projected/82508003-60c8-463b-92a9-bc9521fcfa03-kube-api-access-t85tp\") pod \"82508003-60c8-463b-92a9-bc9521fcfa03\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.379725 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-combined-ca-bundle\") pod \"82508003-60c8-463b-92a9-bc9521fcfa03\" (UID: \"82508003-60c8-463b-92a9-bc9521fcfa03\") " Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.382459 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.382605 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:37.382565751 +0000 UTC m=+1593.343812774 (durationBeforeRetry 2s). 
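
[Annotation] The grace periods in the kill entries differ by orders of magnitude: 10 s for dnsmasq-dns, 30 s for most OpenStack containers, 300 s for the OVS DB servers, and 604800 s for the rabbitmq container just above. The last decodes to exactly seven days, which suggests a deliberately long terminationGracePeriodSeconds for orderly RabbitMQ shutdown rather than a default. A quick conversion:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // gracePeriod values (in seconds) taken from the kill entries in this section.
        for _, gp := range []int64{10, 30, 300, 604800} {
            fmt.Println(gp, "s =", time.Duration(gp)*time.Second) // 604800 s prints as 168h0m0s, i.e. 7 days
        }
    }
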
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.392620 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="707c6502-cbf2-4d94-b032-6d6eeebb581e" path="/var/lib/kubelet/pods/707c6502-cbf2-4d94-b032-6d6eeebb581e/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.397452 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79723cfd-4e3c-446c-bdf1-5c2c997950a8" path="/var/lib/kubelet/pods/79723cfd-4e3c-446c-bdf1-5c2c997950a8/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.399307 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82508003-60c8-463b-92a9-bc9521fcfa03-kube-api-access-t85tp" (OuterVolumeSpecName: "kube-api-access-t85tp") pod "82508003-60c8-463b-92a9-bc9521fcfa03" (UID: "82508003-60c8-463b-92a9-bc9521fcfa03"). InnerVolumeSpecName "kube-api-access-t85tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.399562 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80aa258c-fc1b-4379-8b50-ac89cb9b4568" path="/var/lib/kubelet/pods/80aa258c-fc1b-4379-8b50-ac89cb9b4568/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.403327 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8481722d-b63c-4f8e-82e2-0960d719b46b" path="/var/lib/kubelet/pods/8481722d-b63c-4f8e-82e2-0960d719b46b/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.408636 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c59f1f7-caf7-4ab4-b405-dbf27330ff37" path="/var/lib/kubelet/pods/9c59f1f7-caf7-4ab4-b405-dbf27330ff37/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.422455 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abec2c46-a984-4314-88c5-d50d20ef7f8d" path="/var/lib/kubelet/pods/abec2c46-a984-4314-88c5-d50d20ef7f8d/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.426253 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb76b95-4c2d-478d-b9d9-e6e182859ccd" path="/var/lib/kubelet/pods/adb76b95-4c2d-478d-b9d9-e6e182859ccd/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.428610 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd648327-e40d-4f17-9366-1773fa95f47a" path="/var/lib/kubelet/pods/bd648327-e40d-4f17-9366-1773fa95f47a/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.435206 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3dfb7c0-8bfc-47f8-bd7d-11fa49469326" path="/var/lib/kubelet/pods/e3dfb7c0-8bfc-47f8-bd7d-11fa49469326/volumes" Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.437300 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:35 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:35 crc kubenswrapper[4979]: Jan 30 22:06:35 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:35 crc kubenswrapper[4979]: Jan 30 22:06:35 crc 
kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:35 crc kubenswrapper[4979]: Jan 30 22:06:35 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:35 crc kubenswrapper[4979]: Jan 30 22:06:35 crc kubenswrapper[4979]: if [ -n "cinder" ]; then Jan 30 22:06:35 crc kubenswrapper[4979]: GRANT_DATABASE="cinder" Jan 30 22:06:35 crc kubenswrapper[4979]: else Jan 30 22:06:35 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:35 crc kubenswrapper[4979]: fi Jan 30 22:06:35 crc kubenswrapper[4979]: Jan 30 22:06:35 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:35 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:35 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:35 crc kubenswrapper[4979]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:35 crc kubenswrapper[4979]: # support updates Jan 30 22:06:35 crc kubenswrapper[4979]: Jan 30 22:06:35 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.438497 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-18a2-account-create-update-tgfqm" podUID="d4fc1eef-47e7-4fdd-9642-da7ce95056e8" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.442129 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.447933 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18a2-account-create-update-tgfqm" event={"ID":"d4fc1eef-47e7-4fdd-9642-da7ce95056e8","Type":"ContainerStarted","Data":"d9676cb7e0eb5ddecab92aeb166656644b6133c3cd8ff91f6626cb611a3b2256"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.448049 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-95kjb"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.448076 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ccc5789d5-9fbcz"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.448093 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.448651 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5880-account-create-update-nvk6p"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.448677 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5574d874bd-cg256"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.448695 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5880-account-create-update-nvk6p"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.449136 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5574d874bd-cg256" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-log" containerID="cri-o://4bff6c93d10ae5d79c2f86866faa569249ca91ad63e93e5aed7ec9e5c7ae69e3" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.449413 4979 scope.go:117] "RemoveContainer" 
containerID="6ccf84aaaded71906e123ab07138f1d46a5f8b45f0e088139ccd8642a91c4d8c" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.449742 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-ccc5789d5-9fbcz" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-api" containerID="cri-o://94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.449960 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-log" containerID="cri-o://2764ceb6c35ea2f48a0d751046545351bbcae998483bb75989d6728581aa19d8" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.450057 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-ccc5789d5-9fbcz" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-httpd" containerID="cri-o://cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.450135 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5574d874bd-cg256" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-api" containerID="cri-o://db8279f109bd17f628e44659d3d7f1d466d6bb9b71489014bb4d28dd40cb2a62" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.450447 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-httpd" containerID="cri-o://aa559b1135f6618404d0e60d9a772fc66e419ae78eeefe9bc432ad7bad847635" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.485970 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lz8zj_817d8847-f022-4837-834f-a0e4b124f7ea/openstack-network-exporter/0.log" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.486112 4979 generic.go:334] "Generic (PLEG): container finished" podID="817d8847-f022-4837-834f-a0e4b124f7ea" containerID="bb8bcac19d63070cb472f5498c791e719cc957cf60e16d8441a9b6a9f88dbeff" exitCode=2 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.487455 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lz8zj" event={"ID":"817d8847-f022-4837-834f-a0e4b124f7ea","Type":"ContainerDied","Data":"bb8bcac19d63070cb472f5498c791e719cc957cf60e16d8441a9b6a9f88dbeff"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.490386 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t85tp\" (UniqueName: \"kubernetes.io/projected/82508003-60c8-463b-92a9-bc9521fcfa03-kube-api-access-t85tp\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.513296 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.514094 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="cinder-scheduler" containerID="cri-o://3c9f500d96b7f2b3e97c54f28c77ed3aa52150d439c4b7859470421455c33714" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.514815 4979 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="probe" containerID="cri-o://998a3106aba2ac42665d88c13615a533640da17728cf5d2d8129a1a9548dfb1e" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.517813 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e16537b0-b66e-4bad-a481-9d2755cf6eb5/ovsdbserver-sb/0.log" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.517917 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.530345 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82508003-60c8-463b-92a9-bc9521fcfa03" (UID: "82508003-60c8-463b-92a9-bc9521fcfa03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.530721 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.531429 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api-log" containerID="cri-o://70c9e4b75f4b6026504bbe59f295f79a6dc13bad465ac3a98878072f04debbd7" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.531679 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api" containerID="cri-o://33be242a70bfcf61aafc753268bb59c2e8a2a55bfc2666cef9e675491b558cd9" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.551316 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e8a49e0c-0043-4326-b478-981d19e6480b/ovsdbserver-nb/0.log" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.551464 4979 generic.go:334] "Generic (PLEG): container finished" podID="e8a49e0c-0043-4326-b478-981d19e6480b" containerID="e0b4d6ab18b18def097e57b8f8ea312d94d6ebc53da831f12d75273becc95e4d" exitCode=143 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.551954 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-18a2-account-create-update-tgfqm"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.551997 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8a49e0c-0043-4326-b478-981d19e6480b","Type":"ContainerDied","Data":"e0b4d6ab18b18def097e57b8f8ea312d94d6ebc53da831f12d75273becc95e4d"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.560091 4979 generic.go:334] "Generic (PLEG): container finished" podID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerID="8e3dce5a3229b4152f9145f314182cfb310de1a43da227935ba4d0e27f26cb66" exitCode=1 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.560186 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nxlz6" event={"ID":"2ae1b557-b27a-4331-8c91-bb1934e91fce","Type":"ContainerDied","Data":"8e3dce5a3229b4152f9145f314182cfb310de1a43da227935ba4d0e27f26cb66"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.565638 4979 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-mvqgx"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.568340 4979 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-nxlz6" secret="" err="secret \"galera-openstack-cell1-dockercfg-wj9ck\" not found" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.568429 4979 scope.go:117] "RemoveContainer" containerID="8e3dce5a3229b4152f9145f314182cfb310de1a43da227935ba4d0e27f26cb66" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.580153 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-mvqgx"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.591652 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.591725 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-metrics-certs-tls-certs\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.591766 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-scripts\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.591828 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdbserver-sb-tls-certs\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.591968 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lsh6\" (UniqueName: \"kubernetes.io/projected/e16537b0-b66e-4bad-a481-9d2755cf6eb5-kube-api-access-5lsh6\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.592066 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-config\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.592152 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-combined-ca-bundle\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.592253 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdb-rundir\") pod \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\" (UID: \"e16537b0-b66e-4bad-a481-9d2755cf6eb5\") " Jan 30 22:06:35 crc 
kubenswrapper[4979]: I0130 22:06:35.594933 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.597237 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7fddd57b54-bjm4k"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.597521 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener-log" containerID="cri-o://0a36922f832fee9028934a3bf94046644f1757e67d16e088681eff93cf07c0b1" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.597627 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener" containerID="cri-o://1e3a41213e0b64183674077174838e4b857951ec8d86a2d97f557ed86825981e" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.599813 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-scripts" (OuterVolumeSpecName: "scripts") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.600170 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-config" (OuterVolumeSpecName: "config") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.603349 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.616002 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gds8v"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.618219 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="91cb53bd2b951f74cd0d66aa9f24d08e3c7022176624a9c9ffd768ceb393e191" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.618257 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="7c505ec2a0f97d2fc0eb2e5eb7103ee437e137790c70cbc45de54bec450be932" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619208 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619237 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619247 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619258 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="34b69c813947c1a15abad9192e8f1cfc7295fd0dfaea4369b35dee2f2f213420" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619265 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619282 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619289 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619295 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="20e0cc7660bd336e138f9bda2b90b0037324c98e23852b050c094fc3ec2b9759" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619301 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619308 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619315 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="1fc0f7dc5cf54f3cba376eba063ba52318571cfa76b80fb36465eab8c48ff316" exitCode=0 Jan 30 
22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619322 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="9ebde5265edc1759790d3676946d4106e58a2899f6ca92dff07d39b2c655de8d" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619369 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"91cb53bd2b951f74cd0d66aa9f24d08e3c7022176624a9c9ffd768ceb393e191"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619494 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"7c505ec2a0f97d2fc0eb2e5eb7103ee437e137790c70cbc45de54bec450be932"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619540 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619558 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619571 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619583 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"34b69c813947c1a15abad9192e8f1cfc7295fd0dfaea4369b35dee2f2f213420"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619622 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619637 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619649 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619663 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"20e0cc7660bd336e138f9bda2b90b0037324c98e23852b050c094fc3ec2b9759"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619697 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2"} Jan 30 22:06:35 
crc kubenswrapper[4979]: I0130 22:06:35.619714 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"1fc0f7dc5cf54f3cba376eba063ba52318571cfa76b80fb36465eab8c48ff316"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.619738 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"9ebde5265edc1759790d3676946d4106e58a2899f6ca92dff07d39b2c655de8d"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.629541 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16537b0-b66e-4bad-a481-9d2755cf6eb5-kube-api-access-5lsh6" (OuterVolumeSpecName: "kube-api-access-5lsh6") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "kube-api-access-5lsh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.634718 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-65c8fcd6dc-l7v2f"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.635132 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker-log" containerID="cri-o://d775e4bedb5dba7162d0b89985eadfea2585c2425816a98d45bf2a5aee52a9dc" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.635365 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker" containerID="cri-o://9d8dfa3f28e549253bc3c74adc2593d512df4a8ba19da4e9daca2c7d742b4a42" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.639085 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "82508003-60c8-463b-92a9-bc9521fcfa03" (UID: "82508003-60c8-463b-92a9-bc9521fcfa03"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.639587 4979 generic.go:334] "Generic (PLEG): container finished" podID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerID="80763810cb3d21dbcce7752b095be501d4710e63b0bd5bbd6940f8072de72cd1" exitCode=2 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.639973 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e7cc7cf6-3592-4e25-9578-27ae56d6909b","Type":"ContainerDied","Data":"80763810cb3d21dbcce7752b095be501d4710e63b0bd5bbd6940f8072de72cd1"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.647443 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gds8v"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.656100 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.662443 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "82508003-60c8-463b-92a9-bc9521fcfa03" (UID: "82508003-60c8-463b-92a9-bc9521fcfa03"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.664171 4979 generic.go:334] "Generic (PLEG): container finished" podID="4bae0355-ad11-48d3-a13f-378354677f77" containerID="68738a2810356039fe36b036d04e6e47dff0836ae08b737f9907c8607fb78312" exitCode=0 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.664247 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" event={"ID":"4bae0355-ad11-48d3-a13f-378354677f77","Type":"ContainerDied","Data":"68738a2810356039fe36b036d04e6e47dff0836ae08b737f9907c8607fb78312"} Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.686143 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.686281 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-svtcv"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.698395 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b6e4-account-create-update-6c4qp"] Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.709789 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.709897 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data podName:981f1fee-4d2a-4d80-bf38-80557b6c5033 nodeName:}" failed. 
No retries permitted until 2026-01-30 22:06:39.709873419 +0000 UTC m=+1595.671120452 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data") pod "rabbitmq-cell1-server-0" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033") : configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.709970 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lsh6\" (UniqueName: \"kubernetes.io/projected/e16537b0-b66e-4bad-a481-9d2755cf6eb5-kube-api-access-5lsh6\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710011 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710022 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710049 4979 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710058 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710068 4979 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82508003-60c8-463b-92a9-bc9521fcfa03-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710089 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710099 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e16537b0-b66e-4bad-a481-9d2755cf6eb5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.710353 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.710450 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts podName:2ae1b557-b27a-4331-8c91-bb1934e91fce nodeName:}" failed. No retries permitted until 2026-01-30 22:06:36.210420723 +0000 UTC m=+1592.171667926 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts") pod "root-account-create-update-nxlz6" (UID: "2ae1b557-b27a-4331-8c91-bb1934e91fce") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.710746 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.711243 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-log" containerID="cri-o://3a0f2c5f20fe7df83f657bd57b9e6599013ae4fe90547daa544d3812ba096c45" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.711846 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-httpd" containerID="cri-o://10bc5c2d6026fb9b6e38741866768cd6cce92452ca56fb4384be71b3bffc65c0" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.737287 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-svtcv"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.800410 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cd6984846-6pk8x"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.800895 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cd6984846-6pk8x" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api-log" containerID="cri-o://edcc79875734fdba9dd8e28171366d93b289c592ed8ec92b3fba51d021505e99" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.801135 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cd6984846-6pk8x" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api" containerID="cri-o://b87dfaf39281615f48403ce307bb51ad9f7df21ce90a59879ea17a4270453139" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.819017 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.819801 4979 secret.go:188] Couldn't get secret openstack/neutron-config: secret "neutron-config" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.819876 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:37.819851348 +0000 UTC m=+1593.781098571 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-config" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.820898 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.828319 4979 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 30 22:06:35 crc kubenswrapper[4979]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 30 22:06:35 crc kubenswrapper[4979]: + source /usr/local/bin/container-scripts/functions Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNBridge=br-int Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNRemote=tcp:localhost:6642 Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNEncapType=geneve Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNAvailabilityZones= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ EnableChassisAsGateway=true Jan 30 22:06:35 crc kubenswrapper[4979]: ++ PhysicalNetworks= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNHostName= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 30 22:06:35 crc kubenswrapper[4979]: ++ ovs_dir=/var/lib/openvswitch Jan 30 22:06:35 crc kubenswrapper[4979]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 30 22:06:35 crc kubenswrapper[4979]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 30 22:06:35 crc kubenswrapper[4979]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + sleep 0.5 Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + sleep 0.5 Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + cleanup_ovsdb_server_semaphore Jan 30 22:06:35 crc kubenswrapper[4979]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 22:06:35 crc kubenswrapper[4979]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 30 22:06:35 crc kubenswrapper[4979]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-tmjt2" message=< Jan 30 22:06:35 crc kubenswrapper[4979]: Exiting ovsdb-server (5) ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 30 22:06:35 crc kubenswrapper[4979]: + source /usr/local/bin/container-scripts/functions Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNBridge=br-int Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNRemote=tcp:localhost:6642 Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNEncapType=geneve Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNAvailabilityZones= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ EnableChassisAsGateway=true Jan 30 22:06:35 crc kubenswrapper[4979]: ++ PhysicalNetworks= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNHostName= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 30 22:06:35 crc kubenswrapper[4979]: ++ ovs_dir=/var/lib/openvswitch Jan 30 22:06:35 crc kubenswrapper[4979]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 30 22:06:35 crc kubenswrapper[4979]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 30 22:06:35 crc kubenswrapper[4979]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + sleep 0.5 Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + sleep 0.5 Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + cleanup_ovsdb_server_semaphore Jan 30 22:06:35 crc kubenswrapper[4979]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 22:06:35 crc kubenswrapper[4979]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 30 22:06:35 crc kubenswrapper[4979]: > Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.828376 4979 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 30 22:06:35 crc kubenswrapper[4979]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 30 22:06:35 crc kubenswrapper[4979]: + source /usr/local/bin/container-scripts/functions Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNBridge=br-int Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNRemote=tcp:localhost:6642 Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNEncapType=geneve Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNAvailabilityZones= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ EnableChassisAsGateway=true Jan 30 22:06:35 crc kubenswrapper[4979]: ++ PhysicalNetworks= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ OVNHostName= Jan 30 22:06:35 crc kubenswrapper[4979]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 30 22:06:35 crc kubenswrapper[4979]: ++ ovs_dir=/var/lib/openvswitch Jan 30 22:06:35 crc kubenswrapper[4979]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 30 22:06:35 crc kubenswrapper[4979]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 30 22:06:35 crc kubenswrapper[4979]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + sleep 0.5 Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + sleep 0.5 Jan 30 22:06:35 crc kubenswrapper[4979]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 22:06:35 crc kubenswrapper[4979]: + cleanup_ovsdb_server_semaphore Jan 30 22:06:35 crc kubenswrapper[4979]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 22:06:35 crc kubenswrapper[4979]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 30 22:06:35 crc kubenswrapper[4979]: > pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" containerID="cri-o://2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.828430 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" containerID="cri-o://2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" gracePeriod=29 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.828588 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.828833 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-log" containerID="cri-o://2f2fbcbfa3fb8957bd22dbbdae0f118ed4065b8e1b28fd2310cab48fd875577d" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.828925 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-metadata" containerID="cri-o://99f9e7602668b98789ff476044ada1b106a498ed44ed34ee5c2700adce022186" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.895765 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.896379 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-log" containerID="cri-o://748d1a4bd7c293d8968765b3b267f988706b6c7ba86f06948fccdfb30542ea96" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.896506 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.896681 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-api" containerID="cri-o://5ac3c882827d52df05b6724629ccc459728f629242f9b9649899fbfb3897e504" gracePeriod=30 Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.923337 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.923616 4979 secret.go:188] Couldn't get secret openstack/neutron-httpd-config: secret "neutron-httpd-config" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.923414 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.923492 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.923841 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:39.923645021 +0000 UTC m=+1595.884892224 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-scripts" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.924278 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.924416 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:37.924400701 +0000 UTC m=+1593.885647904 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "httpd-config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-httpd-config" not found Jan 30 22:06:35 crc kubenswrapper[4979]: E0130 22:06:35.924521 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:39.924506964 +0000 UTC m=+1595.885754147 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-config" not found Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.949423 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "e16537b0-b66e-4bad-a481-9d2755cf6eb5" (UID: "e16537b0-b66e-4bad-a481-9d2755cf6eb5"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.966751 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d511-account-create-update-gfm26"] Jan 30 22:06:35 crc kubenswrapper[4979]: I0130 22:06:35.985450 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0121-account-create-update-cjfbd"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.002648 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" containerID="cri-o://ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" gracePeriod=29 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.008303 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gfv78"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.029338 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e16537b0-b66e-4bad-a481-9d2755cf6eb5-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.029383 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-krqxx"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.061295 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qr8n5"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.134854 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e8a49e0c-0043-4326-b478-981d19e6480b/ovsdbserver-nb/0.log" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.134964 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.179126 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gfv78"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.234869 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244395 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-combined-ca-bundle\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244541 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7hdz\" (UniqueName: \"kubernetes.io/projected/e8a49e0c-0043-4326-b478-981d19e6480b-kube-api-access-r7hdz\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244572 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-metrics-certs-tls-certs\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244645 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdb-rundir\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244709 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdbserver-nb-tls-certs\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244734 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-config\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.244758 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-scripts\") pod \"e8a49e0c-0043-4326-b478-981d19e6480b\" (UID: \"e8a49e0c-0043-4326-b478-981d19e6480b\") " Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.245959 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.247208 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts podName:2ae1b557-b27a-4331-8c91-bb1934e91fce nodeName:}" failed. 
No retries permitted until 2026-01-30 22:06:37.247148676 +0000 UTC m=+1593.208395719 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts") pod "root-account-create-update-nxlz6" (UID: "2ae1b557-b27a-4331-8c91-bb1934e91fce") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.252308 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-config" (OuterVolumeSpecName: "config") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.256117 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.261706 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-scripts" (OuterVolumeSpecName: "scripts") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.276852 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1082-account-create-update-vm4l4"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.293481 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qr8n5"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.295557 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.300050 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a49e0c-0043-4326-b478-981d19e6480b-kube-api-access-r7hdz" (OuterVolumeSpecName: "kube-api-access-r7hdz") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "kube-api-access-r7hdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.316396 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-krqxx"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.331804 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.332166 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerName="nova-cell1-conductor-conductor" containerID="cri-o://d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" gracePeriod=30 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.380928 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7hdz\" (UniqueName: \"kubernetes.io/projected/e8a49e0c-0043-4326-b478-981d19e6480b-kube-api-access-r7hdz\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.380975 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.380998 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.381010 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8a49e0c-0043-4326-b478-981d19e6480b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.384190 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.392137 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fgz9b"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.443175 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.443614 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="c04339fa-9eb7-4671-895b-ef768888add0" containerName="nova-cell0-conductor-conductor" containerID="cri-o://383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb" gracePeriod=30 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.454658 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lz8zj_817d8847-f022-4837-834f-a0e4b124f7ea/openstack-network-exporter/0.log" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.454789 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.471360 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cbmzn"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.487434 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.528665 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.529314 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="95748319-965e-49d8-8a00-c0bc1025337d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b" gracePeriod=30 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.546451 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.563207 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fgz9b"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.614455 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerName="galera" containerID="cri-o://b8dd50aa90c7ce48431a68126a4e4bcee3261b44260cf48698bd70f7bf026dc4" gracePeriod=30 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.617102 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-swift-storage-0\") pod \"4bae0355-ad11-48d3-a13f-378354677f77\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.617312 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-combined-ca-bundle\") pod \"817d8847-f022-4837-834f-a0e4b124f7ea\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.617488 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817d8847-f022-4837-834f-a0e4b124f7ea-config\") pod \"817d8847-f022-4837-834f-a0e4b124f7ea\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.617595 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72jjl\" (UniqueName: \"kubernetes.io/projected/4bae0355-ad11-48d3-a13f-378354677f77-kube-api-access-72jjl\") pod \"4bae0355-ad11-48d3-a13f-378354677f77\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.618001 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-sb\") pod \"4bae0355-ad11-48d3-a13f-378354677f77\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.618069 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovn-rundir\") pod \"817d8847-f022-4837-834f-a0e4b124f7ea\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.618093 4979 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-svc\") pod \"4bae0355-ad11-48d3-a13f-378354677f77\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.618186 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovs-rundir\") pod \"817d8847-f022-4837-834f-a0e4b124f7ea\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.618682 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-nb\") pod \"4bae0355-ad11-48d3-a13f-378354677f77\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.637212 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsbsj\" (UniqueName: \"kubernetes.io/projected/817d8847-f022-4837-834f-a0e4b124f7ea-kube-api-access-bsbsj\") pod \"817d8847-f022-4837-834f-a0e4b124f7ea\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.637286 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-metrics-certs-tls-certs\") pod \"817d8847-f022-4837-834f-a0e4b124f7ea\" (UID: \"817d8847-f022-4837-834f-a0e4b124f7ea\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.637326 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-config\") pod \"4bae0355-ad11-48d3-a13f-378354677f77\" (UID: \"4bae0355-ad11-48d3-a13f-378354677f77\") " Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.618257 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "817d8847-f022-4837-834f-a0e4b124f7ea" (UID: "817d8847-f022-4837-834f-a0e4b124f7ea"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.620801 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.632326 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "817d8847-f022-4837-834f-a0e4b124f7ea" (UID: "817d8847-f022-4837-834f-a0e4b124f7ea"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.639302 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/817d8847-f022-4837-834f-a0e4b124f7ea-config" (OuterVolumeSpecName: "config") pod "817d8847-f022-4837-834f-a0e4b124f7ea" (UID: "817d8847-f022-4837-834f-a0e4b124f7ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.646850 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.646893 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817d8847-f022-4837-834f-a0e4b124f7ea-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.646910 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.646925 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.646934 4979 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/817d8847-f022-4837-834f-a0e4b124f7ea-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.677861 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817d8847-f022-4837-834f-a0e4b124f7ea-kube-api-access-bsbsj" (OuterVolumeSpecName: "kube-api-access-bsbsj") pod "817d8847-f022-4837-834f-a0e4b124f7ea" (UID: "817d8847-f022-4837-834f-a0e4b124f7ea"). InnerVolumeSpecName "kube-api-access-bsbsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.697962 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cbmzn"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.703156 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bae0355-ad11-48d3-a13f-378354677f77-kube-api-access-72jjl" (OuterVolumeSpecName: "kube-api-access-72jjl") pod "4bae0355-ad11-48d3-a13f-378354677f77" (UID: "4bae0355-ad11-48d3-a13f-378354677f77"). InnerVolumeSpecName "kube-api-access-72jjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.717697 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.729067 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.742589 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.743497 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.743523 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.744059 4979 generic.go:334] "Generic (PLEG): container finished" podID="94177def-b41a-4af1-bcce-a0673da9f81c" containerID="0a36922f832fee9028934a3bf94046644f1757e67d16e088681eff93cf07c0b1" exitCode=143 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.744140 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" event={"ID":"94177def-b41a-4af1-bcce-a0673da9f81c","Type":"ContainerDied","Data":"0a36922f832fee9028934a3bf94046644f1757e67d16e088681eff93cf07c0b1"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.755477 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsbsj\" (UniqueName: \"kubernetes.io/projected/817d8847-f022-4837-834f-a0e4b124f7ea-kube-api-access-bsbsj\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.755510 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72jjl\" (UniqueName: \"kubernetes.io/projected/4bae0355-ad11-48d3-a13f-378354677f77-kube-api-access-72jjl\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.755585 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.755644 4979 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data podName:e28a1e34-b97c-4090-adf8-fa3e2b766365 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:40.755624889 +0000 UTC m=+1596.716871922 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data") pod "rabbitmq-server-0" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365") : configmap "rabbitmq-config-data" not found Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.757548 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jjtrg"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.781430 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.781786 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-wjh5g"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.788322 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.801257 4979 generic.go:334] "Generic (PLEG): container finished" podID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerID="4bff6c93d10ae5d79c2f86866faa569249ca91ad63e93e5aed7ec9e5c7ae69e3" exitCode=143 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.801407 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5574d874bd-cg256" event={"ID":"c808d1a7-071b-4af7-b86d-adbc0e98803b","Type":"ContainerDied","Data":"4bff6c93d10ae5d79c2f86866faa569249ca91ad63e93e5aed7ec9e5c7ae69e3"} Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.806603 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.806689 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.810353 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e8a49e0c-0043-4326-b478-981d19e6480b/ovsdbserver-nb/0.log" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.810535 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e8a49e0c-0043-4326-b478-981d19e6480b","Type":"ContainerDied","Data":"1ba7eb4e73d21b76aae2c54799684c5d1a7e13a849894846bd2ade424074662c"} Jan 30 22:06:36 crc 
kubenswrapper[4979]: I0130 22:06:36.810604 4979 scope.go:117] "RemoveContainer" containerID="9e984fe191fbb0e089fea2d7c4a853d2ee59f390e44ae404701bd08fbd0e1844" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.810980 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.824693 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jjtrg"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.829137 4979 generic.go:334] "Generic (PLEG): container finished" podID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerID="d775e4bedb5dba7162d0b89985eadfea2585c2425816a98d45bf2a5aee52a9dc" exitCode=143 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.829227 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" event={"ID":"cdfe8d13-8537-4477-ae9e-5c9aa6e104de","Type":"ContainerDied","Data":"d775e4bedb5dba7162d0b89985eadfea2585c2425816a98d45bf2a5aee52a9dc"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.839670 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-nh2b8"] Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.840865 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-kzdcz], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/nova-cell1-016f-account-create-update-nh2b8" podUID="f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.847850 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.848859 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nxlz6"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.874581 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.894506 4979 generic.go:334] "Generic (PLEG): container finished" podID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerID="998a3106aba2ac42665d88c13615a533640da17728cf5d2d8129a1a9548dfb1e" exitCode=0 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.894589 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"21dfd874-e50d-4e61-a634-9f47ee92ff4f","Type":"ContainerDied","Data":"998a3106aba2ac42665d88c13615a533640da17728cf5d2d8129a1a9548dfb1e"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.895810 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "817d8847-f022-4837-834f-a0e4b124f7ea" (UID: "817d8847-f022-4837-834f-a0e4b124f7ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.915412 4979 generic.go:334] "Generic (PLEG): container finished" podID="b0baa205-eff4-4cad-a27f-db3599bba092" containerID="2764ceb6c35ea2f48a0d751046545351bbcae998483bb75989d6728581aa19d8" exitCode=143 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.915805 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0baa205-eff4-4cad-a27f-db3599bba092","Type":"ContainerDied","Data":"2764ceb6c35ea2f48a0d751046545351bbcae998483bb75989d6728581aa19d8"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.931137 4979 generic.go:334] "Generic (PLEG): container finished" podID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerID="748d1a4bd7c293d8968765b3b267f988706b6c7ba86f06948fccdfb30542ea96" exitCode=143 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.931484 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ae89cf4-f9f4-456b-947f-be87514b79ff","Type":"ContainerDied","Data":"748d1a4bd7c293d8968765b3b267f988706b6c7ba86f06948fccdfb30542ea96"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.934604 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" event={"ID":"4bae0355-ad11-48d3-a13f-378354677f77","Type":"ContainerDied","Data":"fcd7f766ab345ea2e8c0ac6bd8fb4c89c2192ee2d80ef64d952c822915831fd5"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.935108 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-kdhtr" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.945318 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bae0355-ad11-48d3-a13f-378354677f77" (UID: "4bae0355-ad11-48d3-a13f-378354677f77"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.952473 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.953247 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f69eed38-4641-4703-8a87-93aedebfbff1" containerName="nova-scheduler-scheduler" containerID="cri-o://e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395" gracePeriod=30 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.969745 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="rabbitmq" containerID="cri-o://eb730deff98069b37c5aef76211404c3781f41d8e0443df163b818199c423131" gracePeriod=604800 Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.971256 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e16537b0-b66e-4bad-a481-9d2755cf6eb5/ovsdbserver-sb/0.log" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.971347 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e16537b0-b66e-4bad-a481-9d2755cf6eb5","Type":"ContainerDied","Data":"6de0f04b65ae33fad502fd47c75940202442c98e117caa698fb7adad6b0870b8"} Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.971470 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.990269 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.990583 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.998341 4979 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 30 22:06:36 crc kubenswrapper[4979]: E0130 22:06:36.998408 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts podName:aec2e945-509e-4cbb-9988-9f6cc840cd62 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:40.99838849 +0000 UTC m=+1596.959635523 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts") pod "glance-default-internal-api-0" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62") : secret "glance-scripts" not found Jan 30 22:06:36 crc kubenswrapper[4979]: I0130 22:06:36.999825 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4bae0355-ad11-48d3-a13f-378354677f77" (UID: "4bae0355-ad11-48d3-a13f-378354677f77"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.008385 4979 generic.go:334] "Generic (PLEG): container finished" podID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerID="3a0f2c5f20fe7df83f657bd57b9e6599013ae4fe90547daa544d3812ba096c45" exitCode=143 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.008514 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aec2e945-509e-4cbb-9988-9f6cc840cd62","Type":"ContainerDied","Data":"3a0f2c5f20fe7df83f657bd57b9e6599013ae4fe90547daa544d3812ba096c45"} Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.022095 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:37 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: if [ -n "glance" ]; then Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="glance" Jan 30 22:06:37 crc kubenswrapper[4979]: else Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:37 crc kubenswrapper[4979]: fi Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:37 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:37 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:37 crc kubenswrapper[4979]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:37 crc kubenswrapper[4979]: # support updates Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.030148 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-b6e4-account-create-update-6c4qp" podUID="fe035ddd-73a5-43fd-8b1d-343447e1f850" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.031879 4979 generic.go:334] "Generic (PLEG): container finished" podID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerID="cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260" exitCode=0 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.034632 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ccc5789d5-9fbcz" event={"ID":"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd","Type":"ContainerDied","Data":"cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.062854 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "e8a49e0c-0043-4326-b478-981d19e6480b" (UID: "e8a49e0c-0043-4326-b478-981d19e6480b"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.064751 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-config" (OuterVolumeSpecName: "config") pod "4bae0355-ad11-48d3-a13f-378354677f77" (UID: "4bae0355-ad11-48d3-a13f-378354677f77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.078166 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerID="edcc79875734fdba9dd8e28171366d93b289c592ed8ec92b3fba51d021505e99" exitCode=143 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.104529 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a49e0c-0043-4326-b478-981d19e6480b-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.104571 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.104581 4979 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.105970 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bae0355-ad11-48d3-a13f-378354677f77" (UID: "4bae0355-ad11-48d3-a13f-378354677f77"). InnerVolumeSpecName "dns-svc". 
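Note: the mariadb-account-create-update script dumped in the error above states its own portability strategy in comments: create the account explicitly (MySQL 8 no longer creates users implicitly on GRANT), then apply password and TLS settings via ALTER (so re-runs update an existing account), then GRANT. The heredoc actually piped into $MYSQL_CMD is cut off in the log, so the statements below are a hypothetical rendering of that stated strategy, not the elided content:

    // account_sql_sketch.go - hypothetical SQL following the script's comments.
    package main

    import "fmt"

    // accountSQL builds create-or-update statements that work on both
    // MySQL 8 and MariaDB, per the three numbered points in the log.
    func accountSQL(user, host, password, database string) []string {
    	if database == "" {
    		database = "*" // the script falls back to GRANT on *
    	}
    	return []string{
    		// 1. explicit create: GRANT alone no longer creates users on MySQL 8
    		fmt.Sprintf("CREATE USER IF NOT EXISTS '%s'@'%s';", user, host),
    		// 2./3. password (and any TLS options) via ALTER so re-runs update
    		fmt.Sprintf("ALTER USER '%s'@'%s' IDENTIFIED BY '%s';", user, host, password),
    		fmt.Sprintf("GRANT ALL PRIVILEGES ON %s.* TO '%s'@'%s';", database, user, host),
    	}
    }

    func main() {
    	for _, stmt := range accountSQL("glance", "%", "s3cret", "glance") {
    		fmt.Println(stmt)
    	}
    }

Note also that the dumped command reads MYSQL_CMD="mysql -h -u root -P 3306" with nothing after -h: the host value was empty when the container spec was rendered, which is consistent with the surrounding errors about missing ConfigMaps and Secrets for these account-create jobs.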
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.163005 4979 generic.go:334] "Generic (PLEG): container finished" podID="44df4390-d39d-42b7-904c-99d3e9680768" containerID="2f2fbcbfa3fb8957bd22dbbdae0f118ed4065b8e1b28fd2310cab48fd875577d" exitCode=143 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.165845 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11b3f71c-0345-4261-8d0c-e7d700eb2932" path="/var/lib/kubelet/pods/11b3f71c-0345-4261-8d0c-e7d700eb2932/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.167940 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170f93fa-8e66-4ae0-ab49-b2db51c1afa5" path="/var/lib/kubelet/pods/170f93fa-8e66-4ae0-ab49-b2db51c1afa5/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.169116 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175f02fa-3089-4350-a658-c939f6e6ef9f" path="/var/lib/kubelet/pods/175f02fa-3089-4350-a658-c939f6e6ef9f/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.171016 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="181d93b8-d7d4-4184-beb4-f4e96f221af5" path="/var/lib/kubelet/pods/181d93b8-d7d4-4184-beb4-f4e96f221af5/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.175497 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a8a7dfa-7a48-4b28-b2c1-22ae610f004a" path="/var/lib/kubelet/pods/4a8a7dfa-7a48-4b28-b2c1-22ae610f004a/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.180886 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd39b08-adf2-44da-b301-8e8694590426" path="/var/lib/kubelet/pods/6dd39b08-adf2-44da-b301-8e8694590426/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.183946 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7743e00f-3d49-4d9f-8057-f86dc7dc8f0e" path="/var/lib/kubelet/pods/7743e00f-3d49-4d9f-8057-f86dc7dc8f0e/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.186131 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82508003-60c8-463b-92a9-bc9521fcfa03" path="/var/lib/kubelet/pods/82508003-60c8-463b-92a9-bc9521fcfa03/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.192498 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83840d8c-fe62-449c-a3ab-5404215dce87" path="/var/lib/kubelet/pods/83840d8c-fe62-449c-a3ab-5404215dce87/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: W0130 22:06:37.205091 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d7f5965_9d27_4649_bb8f_9e99a57c0362.slice/crio-7405887c20d2a30ac20db7b44a4a66f1fa752d1d0716baf1b96905f7efe69139 WatchSource:0}: Error finding container 7405887c20d2a30ac20db7b44a4a66f1fa752d1d0716baf1b96905f7efe69139: Status 404 returned error can't find the container with id 7405887c20d2a30ac20db7b44a4a66f1fa752d1d0716baf1b96905f7efe69139 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.205266 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "817d8847-f022-4837-834f-a0e4b124f7ea" (UID: "817d8847-f022-4837-834f-a0e4b124f7ea"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.207550 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.207596 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/817d8847-f022-4837-834f-a0e4b124f7ea-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.215796 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2df91e7-6710-4ee4-a671-4b19dc5c2798" path="/var/lib/kubelet/pods/a2df91e7-6710-4ee4-a671-4b19dc5c2798/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.222022 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc0c5054-9597-4b94-a1d6-1f424c1d6de4" path="/var/lib/kubelet/pods/bc0c5054-9597-4b94-a1d6-1f424c1d6de4/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.222230 4979 generic.go:334] "Generic (PLEG): container finished" podID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" exitCode=0 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.222742 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8b67e98-62a7-4a61-835e-8b7ec20167f3" path="/var/lib/kubelet/pods/f8b67e98-62a7-4a61-835e-8b7ec20167f3/volumes" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.230016 4979 scope.go:117] "RemoveContainer" containerID="e0b4d6ab18b18def097e57b8f8ea312d94d6ebc53da831f12d75273becc95e4d" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.231021 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:37 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: if [ -n "placement" ]; then Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="placement" Jan 30 22:06:37 crc kubenswrapper[4979]: else Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:37 crc kubenswrapper[4979]: fi Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:37 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:37 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:37 crc kubenswrapper[4979]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:37 crc kubenswrapper[4979]: # support updates Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.231335 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:37 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: if [ -n "nova_cell0" ]; then Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="nova_cell0" Jan 30 22:06:37 crc kubenswrapper[4979]: else Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:37 crc kubenswrapper[4979]: fi Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:37 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:37 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:37 crc kubenswrapper[4979]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:37 crc kubenswrapper[4979]: # support updates Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.231577 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:37 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: if [ -n "neutron" ]; then Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="neutron" Jan 30 22:06:37 crc kubenswrapper[4979]: else Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:37 crc kubenswrapper[4979]: fi Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:37 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:37 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:37 crc kubenswrapper[4979]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:37 crc kubenswrapper[4979]: # support updates Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.231789 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:37 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: if [ -n "nova_api" ]; then Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="nova_api" Jan 30 22:06:37 crc kubenswrapper[4979]: else Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:37 crc kubenswrapper[4979]: fi Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:37 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:37 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:37 crc kubenswrapper[4979]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:37 crc kubenswrapper[4979]: # support updates Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.232409 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" podUID="8573fb5d-0536-4182-95b7-f8d0a16ce994" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.232451 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-0121-account-create-update-cjfbd" podUID="5d7f5965-9d27-4649-bb8f-9e99a57c0362" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.232616 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-d511-account-create-update-gfm26" podUID="e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.232885 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-1082-account-create-update-vm4l4" podUID="b0f67cef-fc43-42c0-967e-d51d1730b419" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.242915 4979 
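Note: every "CreateContainerConfigError: secret ... not found" above is raised before the container ever starts: when the kubelet generates a container's runtime config it must resolve every referenced Secret, and a missing one fails the pod sync (which is then retried). A self-contained sketch of that gate, with a map standing in for the API server; the names here are illustrative, not kubelet code:

    // config_gate_sketch.go - container start blocked on missing secrets.
    package main

    import "fmt"

    type secretStore map[string][]byte // secret name -> payload (stand-in for the API)

    // generateContainerConfig fails if any referenced secret is absent,
    // which is what surfaces as CreateContainerConfigError in pod_workers.
    func generateContainerConfig(store secretStore, requiredSecrets []string) error {
    	for _, name := range requiredSecrets {
    		if _, ok := store[name]; !ok {
    			return fmt.Errorf("secret %q not found", name)
    		}
    	}
    	return nil // config can be built; StartContainer may proceed
    }

    func main() {
    	store := secretStore{"placement-db-secret": []byte("...")}
    	for _, s := range []string{"placement-db-secret", "nova-cell0-db-secret"} {
    		if err := generateContainerConfig(store, []string{s}); err != nil {
    			fmt.Println("StartContainer blocked:", err)
    			continue
    		}
    		fmt.Println("StartContainer ok for", s)
    	}
    }

In this capture the *-db-secret Secrets have already been deleted while the account-create-update pods still exist, so the jobs keep failing config generation until their own DELETE events land.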
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lz8zj_817d8847-f022-4837-834f-a0e4b124f7ea/openstack-network-exporter/0.log" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.243075 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lz8zj" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.260709 4979 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-nxlz6" secret="" err="secret \"galera-openstack-cell1-dockercfg-wj9ck\" not found" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.260778 4979 scope.go:117] "RemoveContainer" containerID="23ad8510aef46a03d09be8ae445862a192f01f665f34f44f707f525fa87b806a" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.261074 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-nxlz6_openstack(2ae1b557-b27a-4331-8c91-bb1934e91fce)\"" pod="openstack/root-account-create-update-nxlz6" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.261471 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cd6984846-6pk8x" event={"ID":"5c466a98-f01c-49ab-841a-8f35c54e71f3","Type":"ContainerDied","Data":"edcc79875734fdba9dd8e28171366d93b289c592ed8ec92b3fba51d021505e99"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.261548 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-18a2-account-create-update-tgfqm"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.261579 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44df4390-d39d-42b7-904c-99d3e9680768","Type":"ContainerDied","Data":"2f2fbcbfa3fb8957bd22dbbdae0f118ed4065b8e1b28fd2310cab48fd875577d"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.261601 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerDied","Data":"2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.261625 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lz8zj" event={"ID":"817d8847-f022-4837-834f-a0e4b124f7ea","Type":"ContainerDied","Data":"4155908da65ed980762b6600d6cd531e31e34d1e8a5cf0688a19ba647961bebc"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.261643 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nxlz6" event={"ID":"2ae1b557-b27a-4331-8c91-bb1934e91fce","Type":"ContainerStarted","Data":"23ad8510aef46a03d09be8ae445862a192f01f665f34f44f707f525fa87b806a"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.289530 4979 generic.go:334] "Generic (PLEG): container finished" podID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerID="70c9e4b75f4b6026504bbe59f295f79a6dc13bad465ac3a98878072f04debbd7" exitCode=143 Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.290546 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"54d2662c-bd60-4a08-accd-e30f0a51518c","Type":"ContainerDied","Data":"70c9e4b75f4b6026504bbe59f295f79a6dc13bad465ac3a98878072f04debbd7"} Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.307163 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4bae0355-ad11-48d3-a13f-378354677f77" (UID: "4bae0355-ad11-48d3-a13f-378354677f77"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.315227 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzdcz\" (UniqueName: \"kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz\") pod \"nova-cell1-016f-account-create-update-nh2b8\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.316393 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bae0355-ad11-48d3-a13f-378354677f77-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.317121 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.317172 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts podName:2ae1b557-b27a-4331-8c91-bb1934e91fce nodeName:}" failed. No retries permitted until 2026-01-30 22:06:39.317154018 +0000 UTC m=+1595.278401051 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts") pod "root-account-create-update-nxlz6" (UID: "2ae1b557-b27a-4331-8c91-bb1934e91fce") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.321754 4979 projected.go:194] Error preparing data for projected volume kube-api-access-kzdcz for pod openstack/nova-cell1-016f-account-create-update-nh2b8: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.321869 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:41.321844254 +0000 UTC m=+1597.283091427 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kzdcz" (UniqueName: "kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.329507 4979 scope.go:117] "RemoveContainer" containerID="68738a2810356039fe36b036d04e6e47dff0836ae08b737f9907c8607fb78312" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.334691 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:37 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: if [ -n "cinder" ]; then Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="cinder" Jan 30 22:06:37 crc kubenswrapper[4979]: else Jan 30 22:06:37 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:37 crc kubenswrapper[4979]: fi Jan 30 22:06:37 crc kubenswrapper[4979]: Jan 30 22:06:37 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:37 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:37 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:37 crc kubenswrapper[4979]: # 3. 
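Note: the kube-api-access-kzdcz failures above show how the projected service-account volume works: mounting it requires minting a token for the pod's ServiceAccount (the TokenRequest subresource; in client-go terms CoreV1().ServiceAccounts(ns).CreateToken(...)), so a deleted ServiceAccount fails MountVolume.SetUp itself and the mount is retried with backoff. A self-contained sketch with a stand-in token issuer (names illustrative):

    // token_mount_sketch.go - projected SA token minting during volume setup.
    package main

    import "fmt"

    type saStore map[string]bool // "namespace/name" -> exists (stand-in for the API)

    // mintToken models the TokenRequest step of a projected volume mount.
    func mintToken(store saStore, namespace, name string) (string, error) {
    	if !store[namespace+"/"+name] {
    		return "", fmt.Errorf("failed to fetch token: serviceaccounts %q not found", name)
    	}
    	return "eyJ...", nil // a real call returns a signed JWT
    }

    func main() {
    	store := saStore{"openstack/galera-openstack": true}
    	if _, err := mintToken(store, "openstack", "galera-openstack-cell1"); err != nil {
    		fmt.Println("MountVolume.SetUp failed:", err) // mirrors projected.go:194 above
    	}
    }

This is why the nova-cell1-016f pod is stuck with "unmounted volumes=[kube-api-access-kzdcz]" earlier in the log: its ServiceAccount was removed before the pod itself was.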
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.336456 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-18a2-account-create-update-tgfqm" podUID="d4fc1eef-47e7-4fdd-9642-da7ce95056e8"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.389121 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6d7cdf56b7-lf2dc"]
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.389281 4979 scope.go:117] "RemoveContainer" containerID="cb53a0bf80799a9038c0ec96174830f51ef5adf97bb87b1dc554e2dbe52de608"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.389468 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-httpd" containerID="cri-o://b5cd75c070f4563e5400007f2a3b5fc99f54b10f69882167ae699e694edff112" gracePeriod=30
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.389686 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-server" containerID="cri-o://6a656e436b19b339c0c277b8bbce77e23d12a120c342e1158752b1f56079e1d7" gracePeriod=30
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401133 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-czjz7"]
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401627 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bae0355-ad11-48d3-a13f-378354677f77" containerName="init"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401646 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bae0355-ad11-48d3-a13f-378354677f77" containerName="init"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401665 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="ovsdbserver-sb"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401671 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="ovsdbserver-sb"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401691 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817d8847-f022-4837-834f-a0e4b124f7ea" containerName="openstack-network-exporter"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401698 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="817d8847-f022-4837-834f-a0e4b124f7ea" containerName="openstack-network-exporter"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401720 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bae0355-ad11-48d3-a13f-378354677f77" containerName="dnsmasq-dns"
Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401727 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bae0355-ad11-48d3-a13f-378354677f77" containerName="dnsmasq-dns"
Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401734 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="openstack-network-exporter"
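Note: each "Killing container with a grace period" entry is the standard two-phase stop: deliver SIGTERM, wait up to gracePeriod seconds (30s for most pods here, 604800s for rabbitmq-server-0 above), then SIGKILL whatever remains. A minimal sketch of that ordering using os/exec; this is illustrative, not the kuberuntime implementation:

    // grace_kill_sketch.go - SIGTERM, bounded wait, then SIGKILL (Unix).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    // stopWithGrace asks the process to exit, then force-kills after the
    // grace period lapses, mirroring the kubelet's two-phase container stop.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
    	_ = cmd.Process.Signal(syscall.SIGTERM) // polite phase
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()
    	select {
    	case <-done:
    		fmt.Println("exited within grace period")
    	case <-time.After(grace):
    		_ = cmd.Process.Kill() // SIGKILL once the grace period lapses
    		<-done
    		fmt.Println("killed after grace period")
    	}
    }

    func main() {
    	cmd := exec.Command("sleep", "60")
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	stopWithGrace(cmd, 2*time.Second)
    }

The SIGTERM phase is why so many containers above finish with exitCode=143 (128+15, i.e. terminated by signal 15) rather than 137.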
podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="openstack-network-exporter" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401740 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="openstack-network-exporter" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401758 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="ovsdbserver-nb" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401764 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="ovsdbserver-nb" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.401780 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="openstack-network-exporter" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401789 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="openstack-network-exporter" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401966 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="ovsdbserver-nb" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401985 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="openstack-network-exporter" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.401996 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="817d8847-f022-4837-834f-a0e4b124f7ea" containerName="openstack-network-exporter" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.402013 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" containerName="ovsdbserver-sb" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.402019 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" containerName="openstack-network-exporter" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.402088 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bae0355-ad11-48d3-a13f-378354677f77" containerName="dnsmasq-dns" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.404562 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-czjz7" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.410758 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.418685 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:37 crc kubenswrapper[4979]: E0130 22:06:37.418762 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts podName:f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:41.418741401 +0000 UTC m=+1597.379988434 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts") pod "nova-cell1-016f-account-create-update-nh2b8" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.444544 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-czjz7"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.448462 4979 scope.go:117] "RemoveContainer" containerID="ff005d24d962eb84bd10a56b66ec88ce9be0ba0641162443a679b4594c534402" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.469987 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b6e4-account-create-update-6c4qp"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.481342 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0121-account-create-update-cjfbd"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.499736 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d511-account-create-update-gfm26"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.508995 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1082-account-create-update-vm4l4"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.517941 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-wjh5g"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.521913 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-operator-scripts\") pod \"root-account-create-update-czjz7\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " pod="openstack/root-account-create-update-czjz7" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.521971 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6sfh\" (UniqueName: \"kubernetes.io/projected/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-kube-api-access-k6sfh\") pod \"root-account-create-update-czjz7\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " pod="openstack/root-account-create-update-czjz7" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.527576 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.543497 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.545907 4979 scope.go:117] "RemoveContainer" containerID="364e682e6c255c1ae57ab43188da7c33d808a98976158abfaa1e6b315ea3de7e" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.553143 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.619485 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.627350 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-operator-scripts\") pod \"root-account-create-update-czjz7\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " 
pod="openstack/root-account-create-update-czjz7" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.627452 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6sfh\" (UniqueName: \"kubernetes.io/projected/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-kube-api-access-k6sfh\") pod \"root-account-create-update-czjz7\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " pod="openstack/root-account-create-update-czjz7" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.628822 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-operator-scripts\") pod \"root-account-create-update-czjz7\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " pod="openstack/root-account-create-update-czjz7" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.630326 4979 scope.go:117] "RemoveContainer" containerID="bb8bcac19d63070cb472f5498c791e719cc957cf60e16d8441a9b6a9f88dbeff" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.676010 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6sfh\" (UniqueName: \"kubernetes.io/projected/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-kube-api-access-k6sfh\") pod \"root-account-create-update-czjz7\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " pod="openstack/root-account-create-update-czjz7" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.693779 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-lz8zj"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.723992 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-lz8zj"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.734050 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-czjz7" Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.793177 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-kdhtr"] Jan 30 22:06:37 crc kubenswrapper[4979]: I0130 22:06:37.802183 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-kdhtr"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:37.895790 4979 secret.go:188] Couldn't get secret openstack/neutron-config: secret "neutron-config" not found Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:37.896319 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:41.896302742 +0000 UTC m=+1597.857549775 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-config" not found Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.001082 4979 secret.go:188] Couldn't get secret openstack/neutron-httpd-config: secret "neutron-httpd-config" not found Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.001177 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. 
No retries permitted until 2026-01-30 22:06:42.001153143 +0000 UTC m=+1597.962400176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "httpd-config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-httpd-config" not found Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.219378 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.221255 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.222916 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.222975 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="ovn-northd" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.309521 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.325528 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"21dfd874-e50d-4e61-a634-9f47ee92ff4f","Type":"ContainerDied","Data":"3c9f500d96b7f2b3e97c54f28c77ed3aa52150d439c4b7859470421455c33714"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.325316 4979 generic.go:334] "Generic (PLEG): container finished" podID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerID="3c9f500d96b7f2b3e97c54f28c77ed3aa52150d439c4b7859470421455c33714" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.335342 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0121-account-create-update-cjfbd" event={"ID":"5d7f5965-9d27-4649-bb8f-9e99a57c0362","Type":"ContainerStarted","Data":"7405887c20d2a30ac20db7b44a4a66f1fa752d1d0716baf1b96905f7efe69139"} Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.346730 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb is running failed: container process not found" containerID="383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.368703 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb is running failed: container process not found" containerID="383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.377653 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb is running failed: container process not found" containerID="383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.377735 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="c04339fa-9eb7-4671-895b-ef768888add0" containerName="nova-cell0-conductor-conductor" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.411086 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-vencrypt-tls-certs\") pod \"95748319-965e-49d8-8a00-c0bc1025337d\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.411472 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-nova-novncproxy-tls-certs\") pod \"95748319-965e-49d8-8a00-c0bc1025337d\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " Jan 30 22:06:38 crc 
kubenswrapper[4979]: I0130 22:06:38.411803 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-combined-ca-bundle\") pod \"95748319-965e-49d8-8a00-c0bc1025337d\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.411828 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2vbh\" (UniqueName: \"kubernetes.io/projected/95748319-965e-49d8-8a00-c0bc1025337d-kube-api-access-t2vbh\") pod \"95748319-965e-49d8-8a00-c0bc1025337d\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.411925 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-config-data\") pod \"95748319-965e-49d8-8a00-c0bc1025337d\" (UID: \"95748319-965e-49d8-8a00-c0bc1025337d\") " Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.424884 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95748319-965e-49d8-8a00-c0bc1025337d-kube-api-access-t2vbh" (OuterVolumeSpecName: "kube-api-access-t2vbh") pod "95748319-965e-49d8-8a00-c0bc1025337d" (UID: "95748319-965e-49d8-8a00-c0bc1025337d"). InnerVolumeSpecName "kube-api-access-t2vbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.439206 4979 generic.go:334] "Generic (PLEG): container finished" podID="c04339fa-9eb7-4671-895b-ef768888add0" containerID="383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.439323 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c04339fa-9eb7-4671-895b-ef768888add0","Type":"ContainerDied","Data":"383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.449904 4979 generic.go:334] "Generic (PLEG): container finished" podID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerID="6a656e436b19b339c0c277b8bbce77e23d12a120c342e1158752b1f56079e1d7" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.449945 4979 generic.go:334] "Generic (PLEG): container finished" podID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerID="b5cd75c070f4563e5400007f2a3b5fc99f54b10f69882167ae699e694edff112" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.450000 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" event={"ID":"b4e29508-bcd2-4f07-807c-dde529c4fa24","Type":"ContainerDied","Data":"6a656e436b19b339c0c277b8bbce77e23d12a120c342e1158752b1f56079e1d7"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.450096 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" event={"ID":"b4e29508-bcd2-4f07-807c-dde529c4fa24","Type":"ContainerDied","Data":"b5cd75c070f4563e5400007f2a3b5fc99f54b10f69882167ae699e694edff112"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.464202 4979 generic.go:334] "Generic (PLEG): container finished" podID="f69eed38-4641-4703-8a87-93aedebfbff1" containerID="e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.464276 4979 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f69eed38-4641-4703-8a87-93aedebfbff1","Type":"ContainerDied","Data":"e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395"} Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.465533 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395 is running failed: container process not found" containerID="e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.474818 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395 is running failed: container process not found" containerID="e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.478647 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395 is running failed: container process not found" containerID="e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.478706 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f69eed38-4641-4703-8a87-93aedebfbff1" containerName="nova-scheduler-scheduler" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.482011 4979 generic.go:334] "Generic (PLEG): container finished" podID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerID="b8dd50aa90c7ce48431a68126a4e4bcee3261b44260cf48698bd70f7bf026dc4" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.482120 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"51b68702-8d5d-43f3-b4e7-936ceb5de933","Type":"ContainerDied","Data":"b8dd50aa90c7ce48431a68126a4e4bcee3261b44260cf48698bd70f7bf026dc4"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.489378 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b6e4-account-create-update-6c4qp" event={"ID":"fe035ddd-73a5-43fd-8b1d-343447e1f850","Type":"ContainerStarted","Data":"7c31f7e9f74ed851718e7a8c33feb2aea66305bcdf09aebc96b0e2bef13aabcb"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.496223 4979 generic.go:334] "Generic (PLEG): container finished" podID="95748319-965e-49d8-8a00-c0bc1025337d" containerID="4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b" exitCode=0 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.496387 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"95748319-965e-49d8-8a00-c0bc1025337d","Type":"ContainerDied","Data":"4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 
22:06:38.496439 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"95748319-965e-49d8-8a00-c0bc1025337d","Type":"ContainerDied","Data":"e4ebf1d98c1bb7fabf7f4934a42326f7066ad90f4a383e7cfc22047a4c8c52a0"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.496549 4979 scope.go:117] "RemoveContainer" containerID="4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.496632 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.499102 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-config-data" (OuterVolumeSpecName: "config-data") pod "95748319-965e-49d8-8a00-c0bc1025337d" (UID: "95748319-965e-49d8-8a00-c0bc1025337d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.502820 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95748319-965e-49d8-8a00-c0bc1025337d" (UID: "95748319-965e-49d8-8a00-c0bc1025337d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.521713 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" event={"ID":"8573fb5d-0536-4182-95b7-f8d0a16ce994","Type":"ContainerStarted","Data":"2cdbf10e9bfd98e947cbb9f05f455256f57b0a782b7a6b3d4f66686e4d98d351"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.522081 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.522132 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2vbh\" (UniqueName: \"kubernetes.io/projected/95748319-965e-49d8-8a00-c0bc1025337d-kube-api-access-t2vbh\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.522144 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.545859 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "95748319-965e-49d8-8a00-c0bc1025337d" (UID: "95748319-965e-49d8-8a00-c0bc1025337d"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.546343 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d511-account-create-update-gfm26" event={"ID":"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7","Type":"ContainerStarted","Data":"e8bb199dbcb75afb57868f8e864cd1c6d1708ad3328ad435676d6e46b226671a"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.558518 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "95748319-965e-49d8-8a00-c0bc1025337d" (UID: "95748319-965e-49d8-8a00-c0bc1025337d"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.564020 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1082-account-create-update-vm4l4" event={"ID":"b0f67cef-fc43-42c0-967e-d51d1730b419","Type":"ContainerStarted","Data":"5fa59fc53b68ec331969f337e5f543de6c0346b458c5cdf6ee3c5a15c6e73440"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.597250 4979 generic.go:334] "Generic (PLEG): container finished" podID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerID="23ad8510aef46a03d09be8ae445862a192f01f665f34f44f707f525fa87b806a" exitCode=1 Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.597371 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nxlz6" event={"ID":"2ae1b557-b27a-4331-8c91-bb1934e91fce","Type":"ContainerDied","Data":"23ad8510aef46a03d09be8ae445862a192f01f665f34f44f707f525fa87b806a"} Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.598120 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.635843 4979 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.635880 4979 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/95748319-965e-49d8-8a00-c0bc1025337d-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.661279 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.751809 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts\") pod \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\" (UID: \"f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2\") " Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.756621 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2" (UID: "f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.764678 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.783074 4979 scope.go:117] "RemoveContainer" containerID="4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b" Jan 30 22:06:38 crc kubenswrapper[4979]: E0130 22:06:38.805491 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b\": container with ID starting with 4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b not found: ID does not exist" containerID="4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.805541 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b"} err="failed to get container status \"4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b\": rpc error: code = NotFound desc = could not find container \"4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b\": container with ID starting with 4769efd786bf73a004aa4057340d6285069ae9f2b2cc15ffe729a3e64a06c70b not found: ID does not exist" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.805571 4979 scope.go:117] "RemoveContainer" containerID="8e3dce5a3229b4152f9145f314182cfb310de1a43da227935ba4d0e27f26cb66" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.887775 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": read tcp 10.217.0.2:59192->10.217.0.168:8776: read: connection reset by peer" Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.939383 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:06:38 crc kubenswrapper[4979]: I0130 22:06:38.953529 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.088317 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.088586 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.123604 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bae0355-ad11-48d3-a13f-378354677f77" path="/var/lib/kubelet/pods/4bae0355-ad11-48d3-a13f-378354677f77/volumes" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.124650 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817d8847-f022-4837-834f-a0e4b124f7ea" 
path="/var/lib/kubelet/pods/817d8847-f022-4837-834f-a0e4b124f7ea/volumes" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.127143 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95748319-965e-49d8-8a00-c0bc1025337d" path="/var/lib/kubelet/pods/95748319-965e-49d8-8a00-c0bc1025337d/volumes" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.130642 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16537b0-b66e-4bad-a481-9d2755cf6eb5" path="/var/lib/kubelet/pods/e16537b0-b66e-4bad-a481-9d2755cf6eb5/volumes" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.134083 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8a49e0c-0043-4326-b478-981d19e6480b" path="/var/lib/kubelet/pods/e8a49e0c-0043-4326-b478-981d19e6480b/volumes" Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.175484 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.181220 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.184219 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.184304 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerName="nova-cell1-conductor-conductor" Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.411625 4979 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.411720 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts podName:2ae1b557-b27a-4331-8c91-bb1934e91fce nodeName:}" failed. No retries permitted until 2026-01-30 22:06:43.411698239 +0000 UTC m=+1599.372945272 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts") pod "root-account-create-update-nxlz6" (UID: "2ae1b557-b27a-4331-8c91-bb1934e91fce") : configmap "openstack-cell1-scripts" not found Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.455774 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": read tcp 10.217.0.2:41584->10.217.0.208:8775: read: connection reset by peer" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.455991 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": read tcp 10.217.0.2:41568->10.217.0.208:8775: read: connection reset by peer" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.632664 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.674746 4979 generic.go:334] "Generic (PLEG): container finished" podID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerID="33be242a70bfcf61aafc753268bb59c2e8a2a55bfc2666cef9e675491b558cd9" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.674881 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54d2662c-bd60-4a08-accd-e30f0a51518c","Type":"ContainerDied","Data":"33be242a70bfcf61aafc753268bb59c2e8a2a55bfc2666cef9e675491b558cd9"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.684140 4979 generic.go:334] "Generic (PLEG): container finished" podID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerID="5ac3c882827d52df05b6724629ccc459728f629242f9b9649899fbfb3897e504" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.684237 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ae89cf4-f9f4-456b-947f-be87514b79ff","Type":"ContainerDied","Data":"5ac3c882827d52df05b6724629ccc459728f629242f9b9649899fbfb3897e504"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.703056 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerID="b87dfaf39281615f48403ce307bb51ad9f7df21ce90a59879ea17a4270453139" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.703165 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cd6984846-6pk8x" event={"ID":"5c466a98-f01c-49ab-841a-8f35c54e71f3","Type":"ContainerDied","Data":"b87dfaf39281615f48403ce307bb51ad9f7df21ce90a59879ea17a4270453139"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.728347 4979 generic.go:334] "Generic (PLEG): container finished" podID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerID="10bc5c2d6026fb9b6e38741866768cd6cce92452ca56fb4384be71b3bffc65c0" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.728931 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aec2e945-509e-4cbb-9988-9f6cc840cd62","Type":"ContainerDied","Data":"10bc5c2d6026fb9b6e38741866768cd6cce92452ca56fb4384be71b3bffc65c0"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729251 4979 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-galera-tls-certs\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729343 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6wgl\" (UniqueName: \"kubernetes.io/projected/51b68702-8d5d-43f3-b4e7-936ceb5de933-kube-api-access-l6wgl\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729392 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729435 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-generated\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729465 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-operator-scripts\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729498 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-kolla-config\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729529 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-combined-ca-bundle\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.729619 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-default\") pod \"51b68702-8d5d-43f3-b4e7-936ceb5de933\" (UID: \"51b68702-8d5d-43f3-b4e7-936ceb5de933\") " Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.730357 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.730440 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data podName:981f1fee-4d2a-4d80-bf38-80557b6c5033 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:47.730416845 +0000 UTC m=+1603.691663878 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data") pod "rabbitmq-cell1-server-0" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033") : configmap "rabbitmq-cell1-config-data" not found Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.738155 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.743620 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.743923 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.757098 4979 generic.go:334] "Generic (PLEG): container finished" podID="44df4390-d39d-42b7-904c-99d3e9680768" containerID="99f9e7602668b98789ff476044ada1b106a498ed44ed34ee5c2700adce022186" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.757225 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44df4390-d39d-42b7-904c-99d3e9680768","Type":"ContainerDied","Data":"99f9e7602668b98789ff476044ada1b106a498ed44ed34ee5c2700adce022186"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.757951 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b68702-8d5d-43f3-b4e7-936ceb5de933-kube-api-access-l6wgl" (OuterVolumeSpecName: "kube-api-access-l6wgl") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "kube-api-access-l6wgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.764115 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.777732 4979 generic.go:334] "Generic (PLEG): container finished" podID="b0baa205-eff4-4cad-a27f-db3599bba092" containerID="aa559b1135f6618404d0e60d9a772fc66e419ae78eeefe9bc432ad7bad847635" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.777887 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0baa205-eff4-4cad-a27f-db3599bba092","Type":"ContainerDied","Data":"aa559b1135f6618404d0e60d9a772fc66e419ae78eeefe9bc432ad7bad847635"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.784541 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.797369 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.798545 4979 generic.go:334] "Generic (PLEG): container finished" podID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerID="9d8dfa3f28e549253bc3c74adc2593d512df4a8ba19da4e9daca2c7d742b4a42" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.798595 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" event={"ID":"cdfe8d13-8537-4477-ae9e-5c9aa6e104de","Type":"ContainerDied","Data":"9d8dfa3f28e549253bc3c74adc2593d512df4a8ba19da4e9daca2c7d742b4a42"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831899 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831931 4979 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831940 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831949 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831960 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6wgl\" (UniqueName: \"kubernetes.io/projected/51b68702-8d5d-43f3-b4e7-936ceb5de933-kube-api-access-l6wgl\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831982 4979 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.831992 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51b68702-8d5d-43f3-b4e7-936ceb5de933-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.859243 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"51b68702-8d5d-43f3-b4e7-936ceb5de933","Type":"ContainerDied","Data":"5c5282dd71d589822510ea8f2d38d385c993be6f5e42e4d1471904abd0c28e55"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.859315 4979 scope.go:117] "RemoveContainer" containerID="b8dd50aa90c7ce48431a68126a4e4bcee3261b44260cf48698bd70f7bf026dc4" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.859471 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.866244 4979 generic.go:334] "Generic (PLEG): container finished" podID="94177def-b41a-4af1-bcce-a0673da9f81c" containerID="1e3a41213e0b64183674077174838e4b857951ec8d86a2d97f557ed86825981e" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.866350 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" event={"ID":"94177def-b41a-4af1-bcce-a0673da9f81c","Type":"ContainerDied","Data":"1e3a41213e0b64183674077174838e4b857951ec8d86a2d97f557ed86825981e"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.891143 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "51b68702-8d5d-43f3-b4e7-936ceb5de933" (UID: "51b68702-8d5d-43f3-b4e7-936ceb5de933"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.895976 4979 generic.go:334] "Generic (PLEG): container finished" podID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerID="db8279f109bd17f628e44659d3d7f1d466d6bb9b71489014bb4d28dd40cb2a62" exitCode=0 Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.896090 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-016f-account-create-update-nh2b8" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.897412 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5574d874bd-cg256" event={"ID":"c808d1a7-071b-4af7-b86d-adbc0e98803b","Type":"ContainerDied","Data":"db8279f109bd17f628e44659d3d7f1d466d6bb9b71489014bb4d28dd40cb2a62"} Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.938357 4979 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/51b68702-8d5d-43f3-b4e7-936ceb5de933-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.938464 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.938526 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:47.938508015 +0000 UTC m=+1603.899755048 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-scripts" not found Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.938857 4979 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 30 22:06:39 crc kubenswrapper[4979]: E0130 22:06:39.938886 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config podName:e7cc7cf6-3592-4e25-9578-27ae56d6909b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:47.938879495 +0000 UTC m=+1603.900126518 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config") pod "ovn-northd-0" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b") : configmap "ovnnorthd-config" not found Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.995656 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6cd6984846-6pk8x" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: connect: connection refused" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.996001 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6cd6984846-6pk8x" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: connect: connection refused" Jan 30 22:06:39 crc kubenswrapper[4979]: I0130 22:06:39.996148 4979 scope.go:117] "RemoveContainer" containerID="92e73fbaf6be7974b5e70d2a4a6be5d1621679737d38de600bb587583fc30031" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.016751 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-nh2b8"] Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.037964 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-016f-account-create-update-nh2b8"] Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.118504 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.132490 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.156504 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-central-agent" containerID="cri-o://5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267" gracePeriod=30 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.157559 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="proxy-httpd" containerID="cri-o://93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed" gracePeriod=30 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.157655 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="sg-core" containerID="cri-o://fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511" gracePeriod=30 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.157718 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-notification-agent" containerID="cri-o://b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c" gracePeriod=30 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.162259 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" 
Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.162322 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzdcz\" (UniqueName: \"kubernetes.io/projected/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2-kube-api-access-kzdcz\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.202746 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.216163 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.268818 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data-custom\") pod \"94177def-b41a-4af1-bcce-a0673da9f81c\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.271309 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data\") pod \"94177def-b41a-4af1-bcce-a0673da9f81c\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.271436 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shqww\" (UniqueName: \"kubernetes.io/projected/94177def-b41a-4af1-bcce-a0673da9f81c-kube-api-access-shqww\") pod \"94177def-b41a-4af1-bcce-a0673da9f81c\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.300076 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "94177def-b41a-4af1-bcce-a0673da9f81c" (UID: "94177def-b41a-4af1-bcce-a0673da9f81c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.334804 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94177def-b41a-4af1-bcce-a0673da9f81c-kube-api-access-shqww" (OuterVolumeSpecName: "kube-api-access-shqww") pod "94177def-b41a-4af1-bcce-a0673da9f81c" (UID: "94177def-b41a-4af1-bcce-a0673da9f81c"). InnerVolumeSpecName "kube-api-access-shqww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.365169 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.365692 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="fe5eba1b-535d-4519-97c5-5e8b8f003d96" containerName="kube-state-metrics" containerID="cri-o://10c1f71e257099ef965fe8ed07f831aabf20fafa7023702d589fe76aa2e8e755" gracePeriod=30 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.384614 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94177def-b41a-4af1-bcce-a0673da9f81c-logs\") pod \"94177def-b41a-4af1-bcce-a0673da9f81c\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.386074 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94177def-b41a-4af1-bcce-a0673da9f81c-logs" (OuterVolumeSpecName: "logs") pod "94177def-b41a-4af1-bcce-a0673da9f81c" (UID: "94177def-b41a-4af1-bcce-a0673da9f81c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.384829 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdrzj\" (UniqueName: \"kubernetes.io/projected/f69eed38-4641-4703-8a87-93aedebfbff1-kube-api-access-sdrzj\") pod \"f69eed38-4641-4703-8a87-93aedebfbff1\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.387398 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-combined-ca-bundle\") pod \"94177def-b41a-4af1-bcce-a0673da9f81c\" (UID: \"94177def-b41a-4af1-bcce-a0673da9f81c\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.391636 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94177def-b41a-4af1-bcce-a0673da9f81c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.391669 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.391693 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shqww\" (UniqueName: \"kubernetes.io/projected/94177def-b41a-4af1-bcce-a0673da9f81c-kube-api-access-shqww\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.403495 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69eed38-4641-4703-8a87-93aedebfbff1-kube-api-access-sdrzj" (OuterVolumeSpecName: "kube-api-access-sdrzj") pod "f69eed38-4641-4703-8a87-93aedebfbff1" (UID: "f69eed38-4641-4703-8a87-93aedebfbff1"). InnerVolumeSpecName "kube-api-access-sdrzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.488951 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data" (OuterVolumeSpecName: "config-data") pod "94177def-b41a-4af1-bcce-a0673da9f81c" (UID: "94177def-b41a-4af1-bcce-a0673da9f81c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.495207 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-config-data\") pod \"f69eed38-4641-4703-8a87-93aedebfbff1\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.495645 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-combined-ca-bundle\") pod \"f69eed38-4641-4703-8a87-93aedebfbff1\" (UID: \"f69eed38-4641-4703-8a87-93aedebfbff1\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.497462 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdrzj\" (UniqueName: \"kubernetes.io/projected/f69eed38-4641-4703-8a87-93aedebfbff1-kube-api-access-sdrzj\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.497484 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.505945 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.515735 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.532745 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.591299 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.595876 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-config-data" (OuterVolumeSpecName: "config-data") pod "f69eed38-4641-4703-8a87-93aedebfbff1" (UID: "f69eed38-4641-4703-8a87-93aedebfbff1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.599338 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.625107 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.633301 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f69eed38-4641-4703-8a87-93aedebfbff1" (UID: "f69eed38-4641-4703-8a87-93aedebfbff1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.636731 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.642220 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94177def-b41a-4af1-bcce-a0673da9f81c" (UID: "94177def-b41a-4af1-bcce-a0673da9f81c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.655834 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.678820 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.702441 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-operator-scripts\") pod \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.702517 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-combined-ca-bundle\") pod \"c04339fa-9eb7-4671-895b-ef768888add0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.702550 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f67cef-fc43-42c0-967e-d51d1730b419-operator-scripts\") pod \"b0f67cef-fc43-42c0-967e-d51d1730b419\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.702666 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-config-data\") pod \"c04339fa-9eb7-4671-895b-ef768888add0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.702908 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rlm9\" (UniqueName: \"kubernetes.io/projected/b0f67cef-fc43-42c0-967e-d51d1730b419-kube-api-access-7rlm9\") pod \"b0f67cef-fc43-42c0-967e-d51d1730b419\" (UID: \"b0f67cef-fc43-42c0-967e-d51d1730b419\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.702982 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgphw\" (UniqueName: 
\"kubernetes.io/projected/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-kube-api-access-jgphw\") pod \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\" (UID: \"d4fc1eef-47e7-4fdd-9642-da7ce95056e8\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.703073 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4sfk\" (UniqueName: \"kubernetes.io/projected/c04339fa-9eb7-4671-895b-ef768888add0-kube-api-access-f4sfk\") pod \"c04339fa-9eb7-4671-895b-ef768888add0\" (UID: \"c04339fa-9eb7-4671-895b-ef768888add0\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.704703 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0f67cef-fc43-42c0-967e-d51d1730b419-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0f67cef-fc43-42c0-967e-d51d1730b419" (UID: "b0f67cef-fc43-42c0-967e-d51d1730b419"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.705362 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4fc1eef-47e7-4fdd-9642-da7ce95056e8" (UID: "d4fc1eef-47e7-4fdd-9642-da7ce95056e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.706054 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.706230 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f67cef-fc43-42c0-967e-d51d1730b419-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.706256 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69eed38-4641-4703-8a87-93aedebfbff1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.706271 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94177def-b41a-4af1-bcce-a0673da9f81c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.718839 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f67cef-fc43-42c0-967e-d51d1730b419-kube-api-access-7rlm9" (OuterVolumeSpecName: "kube-api-access-7rlm9") pod "b0f67cef-fc43-42c0-967e-d51d1730b419" (UID: "b0f67cef-fc43-42c0-967e-d51d1730b419"). InnerVolumeSpecName "kube-api-access-7rlm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.721425 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04339fa-9eb7-4671-895b-ef768888add0-kube-api-access-f4sfk" (OuterVolumeSpecName: "kube-api-access-f4sfk") pod "c04339fa-9eb7-4671-895b-ef768888add0" (UID: "c04339fa-9eb7-4671-895b-ef768888add0"). InnerVolumeSpecName "kube-api-access-f4sfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.732557 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-kube-api-access-jgphw" (OuterVolumeSpecName: "kube-api-access-jgphw") pod "d4fc1eef-47e7-4fdd-9642-da7ce95056e8" (UID: "d4fc1eef-47e7-4fdd-9642-da7ce95056e8"). InnerVolumeSpecName "kube-api-access-jgphw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.775601 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c04339fa-9eb7-4671-895b-ef768888add0" (UID: "c04339fa-9eb7-4671-895b-ef768888add0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.805293 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-config-data" (OuterVolumeSpecName: "config-data") pod "c04339fa-9eb7-4671-895b-ef768888add0" (UID: "c04339fa-9eb7-4671-895b-ef768888add0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.807512 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-etc-swift\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.807590 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-scripts\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.807658 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.807728 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-config-data\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.807759 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4nsx\" (UniqueName: \"kubernetes.io/projected/21dfd874-e50d-4e61-a634-9f47ee92ff4f-kube-api-access-d4nsx\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.807888 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-log-httpd\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808005 
4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-internal-tls-certs\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808098 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts\") pod \"2ae1b557-b27a-4331-8c91-bb1934e91fce\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808150 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2cqk\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-kube-api-access-x2cqk\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808192 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data-custom\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808247 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-public-tls-certs\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808306 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-combined-ca-bundle\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808332 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-run-httpd\") pod \"b4e29508-bcd2-4f07-807c-dde529c4fa24\" (UID: \"b4e29508-bcd2-4f07-807c-dde529c4fa24\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808428 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s998g\" (UniqueName: \"kubernetes.io/projected/2ae1b557-b27a-4331-8c91-bb1934e91fce-kube-api-access-s998g\") pod \"2ae1b557-b27a-4331-8c91-bb1934e91fce\" (UID: \"2ae1b557-b27a-4331-8c91-bb1934e91fce\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808467 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808504 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21dfd874-e50d-4e61-a634-9f47ee92ff4f-etc-machine-id\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 
22:06:40.808968 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgphw\" (UniqueName: \"kubernetes.io/projected/d4fc1eef-47e7-4fdd-9642-da7ce95056e8-kube-api-access-jgphw\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808982 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4sfk\" (UniqueName: \"kubernetes.io/projected/c04339fa-9eb7-4671-895b-ef768888add0-kube-api-access-f4sfk\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.808992 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.809001 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04339fa-9eb7-4671-895b-ef768888add0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.809010 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rlm9\" (UniqueName: \"kubernetes.io/projected/b0f67cef-fc43-42c0-967e-d51d1730b419-kube-api-access-7rlm9\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: E0130 22:06:40.809097 4979 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 22:06:40 crc kubenswrapper[4979]: E0130 22:06:40.809152 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data podName:e28a1e34-b97c-4090-adf8-fa3e2b766365 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:48.809135852 +0000 UTC m=+1604.770382885 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data") pod "rabbitmq-server-0" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365") : configmap "rabbitmq-config-data" not found Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.811846 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.814525 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.818010 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae1b557-b27a-4331-8c91-bb1934e91fce-kube-api-access-s998g" (OuterVolumeSpecName: "kube-api-access-s998g") pod "2ae1b557-b27a-4331-8c91-bb1934e91fce" (UID: "2ae1b557-b27a-4331-8c91-bb1934e91fce"). InnerVolumeSpecName "kube-api-access-s998g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.818075 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.825289 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-kube-api-access-x2cqk" (OuterVolumeSpecName: "kube-api-access-x2cqk") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "kube-api-access-x2cqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.828817 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ae1b557-b27a-4331-8c91-bb1934e91fce" (UID: "2ae1b557-b27a-4331-8c91-bb1934e91fce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.829683 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.830190 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21dfd874-e50d-4e61-a634-9f47ee92ff4f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.839975 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21dfd874-e50d-4e61-a634-9f47ee92ff4f-kube-api-access-d4nsx" (OuterVolumeSpecName: "kube-api-access-d4nsx") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "kube-api-access-d4nsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.846326 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-scripts" (OuterVolumeSpecName: "scripts") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.893326 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-config-data" (OuterVolumeSpecName: "config-data") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.910309 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.910589 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle\") pod \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\" (UID: \"21dfd874-e50d-4e61-a634-9f47ee92ff4f\") " Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911226 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/21dfd874-e50d-4e61-a634-9f47ee92ff4f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911251 4979 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911264 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911278 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911289 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4nsx\" (UniqueName: \"kubernetes.io/projected/21dfd874-e50d-4e61-a634-9f47ee92ff4f-kube-api-access-d4nsx\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911306 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: W0130 22:06:40.911304 4979 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/21dfd874-e50d-4e61-a634-9f47ee92ff4f/volumes/kubernetes.io~secret/combined-ca-bundle Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911319 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae1b557-b27a-4331-8c91-bb1934e91fce-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911327 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911334 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2cqk\" (UniqueName: \"kubernetes.io/projected/b4e29508-bcd2-4f07-807c-dde529c4fa24-kube-api-access-x2cqk\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911370 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911383 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4e29508-bcd2-4f07-807c-dde529c4fa24-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.911421 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s998g\" (UniqueName: \"kubernetes.io/projected/2ae1b557-b27a-4331-8c91-bb1934e91fce-kube-api-access-s998g\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.919138 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d511-account-create-update-gfm26" event={"ID":"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7","Type":"ContainerDied","Data":"e8bb199dbcb75afb57868f8e864cd1c6d1708ad3328ad435676d6e46b226671a"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.919524 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8bb199dbcb75afb57868f8e864cd1c6d1708ad3328ad435676d6e46b226671a" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.933577 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.939493 4979 generic.go:334] "Generic (PLEG): container finished" podID="3b34adef-df84-42dd-a052-5e543c4182b5" containerID="93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed" exitCode=0 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.939818 4979 generic.go:334] "Generic (PLEG): container finished" podID="3b34adef-df84-42dd-a052-5e543c4182b5" containerID="fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511" exitCode=2 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.939991 4979 generic.go:334] "Generic (PLEG): container finished" podID="3b34adef-df84-42dd-a052-5e543c4182b5" containerID="5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267" exitCode=0 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.940346 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerDied","Data":"93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.940494 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerDied","Data":"fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.940647 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerDied","Data":"5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.946464 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b6e4-account-create-update-6c4qp" event={"ID":"fe035ddd-73a5-43fd-8b1d-343447e1f850","Type":"ContainerDied","Data":"7c31f7e9f74ed851718e7a8c33feb2aea66305bcdf09aebc96b0e2bef13aabcb"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.946526 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c31f7e9f74ed851718e7a8c33feb2aea66305bcdf09aebc96b0e2bef13aabcb" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.946655 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.952165 4979 generic.go:334] "Generic (PLEG): container finished" podID="fe5eba1b-535d-4519-97c5-5e8b8f003d96" containerID="10c1f71e257099ef965fe8ed07f831aabf20fafa7023702d589fe76aa2e8e755" exitCode=2 Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.952279 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fe5eba1b-535d-4519-97c5-5e8b8f003d96","Type":"ContainerDied","Data":"10c1f71e257099ef965fe8ed07f831aabf20fafa7023702d589fe76aa2e8e755"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.966108 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"21dfd874-e50d-4e61-a634-9f47ee92ff4f","Type":"ContainerDied","Data":"7b54d9cd9b678a4fb7d379f7d73256fcce04b1be22cb1e39a15b4c8b5b614aed"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.966176 4979 scope.go:117] "RemoveContainer" containerID="998a3106aba2ac42665d88c13615a533640da17728cf5d2d8129a1a9548dfb1e" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.966320 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.972824 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0121-account-create-update-cjfbd" event={"ID":"5d7f5965-9d27-4649-bb8f-9e99a57c0362","Type":"ContainerDied","Data":"7405887c20d2a30ac20db7b44a4a66f1fa752d1d0716baf1b96905f7efe69139"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.972955 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7405887c20d2a30ac20db7b44a4a66f1fa752d1d0716baf1b96905f7efe69139" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.976394 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1082-account-create-update-vm4l4" event={"ID":"b0f67cef-fc43-42c0-967e-d51d1730b419","Type":"ContainerDied","Data":"5fa59fc53b68ec331969f337e5f543de6c0346b458c5cdf6ee3c5a15c6e73440"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.976470 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1082-account-create-update-vm4l4" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.983679 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c04339fa-9eb7-4671-895b-ef768888add0","Type":"ContainerDied","Data":"9644ea1b50d881a5fc87efbeb25d5fe3195c9de5bf0f6fd1b1d5b2e65c2a5124"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.983835 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.986677 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" event={"ID":"8573fb5d-0536-4182-95b7-f8d0a16ce994","Type":"ContainerDied","Data":"2cdbf10e9bfd98e947cbb9f05f455256f57b0a782b7a6b3d4f66686e4d98d351"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.986829 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cdbf10e9bfd98e947cbb9f05f455256f57b0a782b7a6b3d4f66686e4d98d351" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.987815 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b4e29508-bcd2-4f07-807c-dde529c4fa24" (UID: "b4e29508-bcd2-4f07-807c-dde529c4fa24"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.998126 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" event={"ID":"94177def-b41a-4af1-bcce-a0673da9f81c","Type":"ContainerDied","Data":"3dde96c5169697a3e0c9d8b160bc83a4fafb1d44e05b294c10a09b1f06d958c9"} Jan 30 22:06:40 crc kubenswrapper[4979]: I0130 22:06:40.998322 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7fddd57b54-bjm4k" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.018056 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.019617 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.019715 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.019833 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e29508-bcd2-4f07-807c-dde529c4fa24-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.018288 4979 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.020076 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts podName:aec2e945-509e-4cbb-9988-9f6cc840cd62 nodeName:}" failed. No retries permitted until 2026-01-30 22:06:49.020043987 +0000 UTC m=+1604.981291080 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts") pod "glance-default-internal-api-0" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62") : secret "glance-scripts" not found Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.021514 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.022085 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data" (OuterVolumeSpecName: "config-data") pod "21dfd874-e50d-4e61-a634-9f47ee92ff4f" (UID: "21dfd874-e50d-4e61-a634-9f47ee92ff4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.022254 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f69eed38-4641-4703-8a87-93aedebfbff1","Type":"ContainerDied","Data":"44999172f23ebc85109e86b0754fcca5c95fcb604e5236af4579e9ca3325bed8"} Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.022332 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.024289 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.024716 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nxlz6" event={"ID":"2ae1b557-b27a-4331-8c91-bb1934e91fce","Type":"ContainerDied","Data":"b83c4ed8bbda19ed5aa54ca0fc84bb29d05f7a78681b54738255e43bd19127ba"} Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.024795 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nxlz6" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.038526 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-18a2-account-create-update-tgfqm" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.038543 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-18a2-account-create-update-tgfqm" event={"ID":"d4fc1eef-47e7-4fdd-9642-da7ce95056e8","Type":"ContainerDied","Data":"d9676cb7e0eb5ddecab92aeb166656644b6133c3cd8ff91f6626cb611a3b2256"} Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.045025 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" event={"ID":"b4e29508-bcd2-4f07-807c-dde529c4fa24","Type":"ContainerDied","Data":"b64735411ca3cd7394e31868ccdaa7a77e584aec6259c66bd68d292da88aa3c5"} Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.045477 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.074084 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.076868 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.082398 4979 scope.go:117] "RemoveContainer" containerID="3c9f500d96b7f2b3e97c54f28c77ed3aa52150d439c4b7859470421455c33714" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.121845 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfxkv\" (UniqueName: \"kubernetes.io/projected/5d7f5965-9d27-4649-bb8f-9e99a57c0362-kube-api-access-hfxkv\") pod \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.121927 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8573fb5d-0536-4182-95b7-f8d0a16ce994-operator-scripts\") pod \"8573fb5d-0536-4182-95b7-f8d0a16ce994\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.122206 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dfzv\" (UniqueName: \"kubernetes.io/projected/8573fb5d-0536-4182-95b7-f8d0a16ce994-kube-api-access-9dfzv\") pod \"8573fb5d-0536-4182-95b7-f8d0a16ce994\" (UID: \"8573fb5d-0536-4182-95b7-f8d0a16ce994\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.122264 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d7f5965-9d27-4649-bb8f-9e99a57c0362-operator-scripts\") pod \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\" (UID: \"5d7f5965-9d27-4649-bb8f-9e99a57c0362\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.127995 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d7f5965-9d27-4649-bb8f-9e99a57c0362-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d7f5965-9d27-4649-bb8f-9e99a57c0362" (UID: "5d7f5965-9d27-4649-bb8f-9e99a57c0362"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.134132 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d7f5965-9d27-4649-bb8f-9e99a57c0362-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.136088 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21dfd874-e50d-4e61-a634-9f47ee92ff4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.134215 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8573fb5d-0536-4182-95b7-f8d0a16ce994-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8573fb5d-0536-4182-95b7-f8d0a16ce994" (UID: "8573fb5d-0536-4182-95b7-f8d0a16ce994"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.141546 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8573fb5d-0536-4182-95b7-f8d0a16ce994-kube-api-access-9dfzv" (OuterVolumeSpecName: "kube-api-access-9dfzv") pod "8573fb5d-0536-4182-95b7-f8d0a16ce994" (UID: "8573fb5d-0536-4182-95b7-f8d0a16ce994"). InnerVolumeSpecName "kube-api-access-9dfzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.141717 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d7f5965-9d27-4649-bb8f-9e99a57c0362-kube-api-access-hfxkv" (OuterVolumeSpecName: "kube-api-access-hfxkv") pod "5d7f5965-9d27-4649-bb8f-9e99a57c0362" (UID: "5d7f5965-9d27-4649-bb8f-9e99a57c0362"). InnerVolumeSpecName "kube-api-access-hfxkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.144501 4979 scope.go:117] "RemoveContainer" containerID="383dacc9b8126b9999e1206d4f0446aea969d33784d7f5b9d8d72c0b5d1200fb" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.236876 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhspj\" (UniqueName: \"kubernetes.io/projected/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-kube-api-access-hhspj\") pod \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.237072 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe035ddd-73a5-43fd-8b1d-343447e1f850-operator-scripts\") pod \"fe035ddd-73a5-43fd-8b1d-343447e1f850\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.237147 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcnjv\" (UniqueName: \"kubernetes.io/projected/fe035ddd-73a5-43fd-8b1d-343447e1f850-kube-api-access-gcnjv\") pod \"fe035ddd-73a5-43fd-8b1d-343447e1f850\" (UID: \"fe035ddd-73a5-43fd-8b1d-343447e1f850\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.237235 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-operator-scripts\") pod \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\" (UID: \"e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7\") " Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.237861 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dfzv\" (UniqueName: \"kubernetes.io/projected/8573fb5d-0536-4182-95b7-f8d0a16ce994-kube-api-access-9dfzv\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.237884 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfxkv\" (UniqueName: \"kubernetes.io/projected/5d7f5965-9d27-4649-bb8f-9e99a57c0362-kube-api-access-hfxkv\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.237898 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8573fb5d-0536-4182-95b7-f8d0a16ce994-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.238546 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe035ddd-73a5-43fd-8b1d-343447e1f850-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe035ddd-73a5-43fd-8b1d-343447e1f850" (UID: "fe035ddd-73a5-43fd-8b1d-343447e1f850"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.256763 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe035ddd-73a5-43fd-8b1d-343447e1f850-kube-api-access-gcnjv" (OuterVolumeSpecName: "kube-api-access-gcnjv") pod "fe035ddd-73a5-43fd-8b1d-343447e1f850" (UID: "fe035ddd-73a5-43fd-8b1d-343447e1f850"). InnerVolumeSpecName "kube-api-access-gcnjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.257113 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7" (UID: "e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.275182 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-kube-api-access-hhspj" (OuterVolumeSpecName: "kube-api-access-hhspj") pod "e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7" (UID: "e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7"). InnerVolumeSpecName "kube-api-access-hhspj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.343148 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.343181 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhspj\" (UniqueName: \"kubernetes.io/projected/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7-kube-api-access-hhspj\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.343191 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe035ddd-73a5-43fd-8b1d-343447e1f850-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.343201 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcnjv\" (UniqueName: \"kubernetes.io/projected/fe035ddd-73a5-43fd-8b1d-343447e1f850-kube-api-access-gcnjv\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.346117 4979 scope.go:117] "RemoveContainer" containerID="1e3a41213e0b64183674077174838e4b857951ec8d86a2d97f557ed86825981e" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.700444 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.701260 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" 
containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.701529 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.701557 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.702691 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.704949 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.706370 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.706454 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.823202 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kxk8g" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.905120 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" path="/var/lib/kubelet/pods/51b68702-8d5d-43f3-b4e7-936ceb5de933/volumes" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.907257 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2" path="/var/lib/kubelet/pods/f715ec9f-ff5c-4c2a-ac4e-8fef7557f3b2/volumes" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910023 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-conductor-0"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910078 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910350 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nxlz6"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910372 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nxlz6"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910393 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1082-account-create-update-vm4l4"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910408 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1082-account-create-update-vm4l4"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910426 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910446 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bb3f-account-create-update-dc7fc"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910461 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6d7cdf56b7-lf2dc"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910475 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-6d7cdf56b7-lf2dc"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910495 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bb3f-account-create-update-dc7fc"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910514 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bb3f-account-create-update-f78xh"] Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.910943 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04339fa-9eb7-4671-895b-ef768888add0" containerName="nova-cell0-conductor-conductor" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910961 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04339fa-9eb7-4671-895b-ef768888add0" containerName="nova-cell0-conductor-conductor" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.910979 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener-log" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.910990 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener-log" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911006 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerName="mariadb-account-create-update" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911015 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerName="mariadb-account-create-update" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911028 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerName="mysql-bootstrap" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911669 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" 
containerName="mysql-bootstrap" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911685 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerName="mariadb-account-create-update" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911695 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerName="mariadb-account-create-update" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911709 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69eed38-4641-4703-8a87-93aedebfbff1" containerName="nova-scheduler-scheduler" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911717 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69eed38-4641-4703-8a87-93aedebfbff1" containerName="nova-scheduler-scheduler" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911729 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-server" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911737 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-server" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911752 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerName="galera" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911759 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerName="galera" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911781 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-httpd" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911789 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-httpd" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911800 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911809 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911822 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="cinder-scheduler" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911832 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="cinder-scheduler" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911845 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="probe" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911854 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="probe" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.911864 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95748319-965e-49d8-8a00-c0bc1025337d" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.911871 4979 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="95748319-965e-49d8-8a00-c0bc1025337d" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912138 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener-log" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912156 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="probe" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912173 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69eed38-4641-4703-8a87-93aedebfbff1" containerName="nova-scheduler-scheduler" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912188 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerName="mariadb-account-create-update" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912204 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" containerName="barbican-keystone-listener" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912213 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-server" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912223 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-httpd" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912232 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" containerName="cinder-scheduler" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912243 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" containerName="mariadb-account-create-update" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912258 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b68702-8d5d-43f3-b4e7-936ceb5de933" containerName="galera" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912269 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04339fa-9eb7-4671-895b-ef768888add0" containerName="nova-cell0-conductor-conductor" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.912284 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="95748319-965e-49d8-8a00-c0bc1025337d" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913086 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bb3f-account-create-update-f78xh"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913107 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dmn2z"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913122 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-tj4gc"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913138 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-tj4gc"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913155 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dmn2z"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913170 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-f5778c484-5rg8p"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913190 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913215 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-czjz7"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.913232 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zct57"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.915023 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" containerName="memcached" containerID="cri-o://11167d299d7103f588d853413dc7b7095145b87d82239c5f576cb6d82dbfce8a" gracePeriod=30 Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.915280 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.915757 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-f5778c484-5rg8p" podUID="93c29874-a63d-4d35-a1a6-256d811ac6f8" containerName="keystone-api" containerID="cri-o://dc00335b3349ed9094fcb23ca1c7d69e4482f30a798683dca97095cbf88e35db" gracePeriod=30 Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.918519 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.932005 4979 secret.go:188] Couldn't get secret openstack/neutron-config: secret "neutron-config" not found Jan 30 22:06:41 crc kubenswrapper[4979]: E0130 22:06:41.940081 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:49.93961172 +0000 UTC m=+1605.900858743 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-config" not found Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.942251 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zct57"] Jan 30 22:06:41 crc kubenswrapper[4979]: I0130 22:06:41.985629 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bb3f-account-create-update-f78xh"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.000622 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kxk8g" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" probeResult="failure" output=< Jan 30 22:06:42 crc kubenswrapper[4979]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 30 22:06:42 crc kubenswrapper[4979]: > Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.033483 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn72x\" (UniqueName: \"kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.033581 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.033813 4979 secret.go:188] Couldn't get secret openstack/neutron-httpd-config: secret "neutron-httpd-config" not found Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.033939 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config podName:c8cc63f5-501a-4bd5-962b-a1f218fbbcdd nodeName:}" failed. No retries permitted until 2026-01-30 22:06:50.033908598 +0000 UTC m=+1605.995155811 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "httpd-config" (UniqueName: "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config") pod "neutron-ccc5789d5-9fbcz" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd") : secret "neutron-httpd-config" not found Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.069482 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" event={"ID":"cdfe8d13-8537-4477-ae9e-5c9aa6e104de","Type":"ContainerDied","Data":"bb24789e94c037f8d2c30cb247391e1793581183cde1ad3d02b4c483f6507c5b"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.069645 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb24789e94c037f8d2c30cb247391e1793581183cde1ad3d02b4c483f6507c5b" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.087363 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cd6984846-6pk8x" event={"ID":"5c466a98-f01c-49ab-841a-8f35c54e71f3","Type":"ContainerDied","Data":"ba4330dae356e6288f48e7433253a51e62211bb964fb07d760695db2d247a961"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.087447 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba4330dae356e6288f48e7433253a51e62211bb964fb07d760695db2d247a961" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.094614 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aec2e945-509e-4cbb-9988-9f6cc840cd62","Type":"ContainerDied","Data":"990e62f23c4a472cdff8c54aae9968515af6d18a52e99ad51f4c27a84120a7dd"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.094680 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="990e62f23c4a472cdff8c54aae9968515af6d18a52e99ad51f4c27a84120a7dd" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.098670 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5574d874bd-cg256" event={"ID":"c808d1a7-071b-4af7-b86d-adbc0e98803b","Type":"ContainerDied","Data":"c71bfcc6c14d502ef3f1710a10249e134e050a56fd12f729024104e4faa161e9"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.098721 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c71bfcc6c14d502ef3f1710a10249e134e050a56fd12f729024104e4faa161e9" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.101814 4979 generic.go:334] "Generic (PLEG): container finished" podID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerID="32737030f36aec701cd5a18ee26db33f1920b61eff0e7b5c5143eb68b64ad2a2" exitCode=0 Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.101869 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"981f1fee-4d2a-4d80-bf38-80557b6c5033","Type":"ContainerDied","Data":"32737030f36aec701cd5a18ee26db33f1920b61eff0e7b5c5143eb68b64ad2a2"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.104600 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0baa205-eff4-4cad-a27f-db3599bba092","Type":"ContainerDied","Data":"1ef7dfba2654b435b80b29127f1c9700a1f54fff7b56b29307a2ed4beab2ff4b"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.104635 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ef7dfba2654b435b80b29127f1c9700a1f54fff7b56b29307a2ed4beab2ff4b" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.113858 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fe5eba1b-535d-4519-97c5-5e8b8f003d96","Type":"ContainerDied","Data":"e767f426672122a96f0cd7039ae94afca30f78fd0f314386c2949731da06d561"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.113912 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e767f426672122a96f0cd7039ae94afca30f78fd0f314386c2949731da06d561" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.134489 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54d2662c-bd60-4a08-accd-e30f0a51518c","Type":"ContainerDied","Data":"63cab1632ab5734414fe0ad9e4d6c6c07d6d67f4ee2af410de1ca78ec4b0eb26"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.134550 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63cab1632ab5734414fe0ad9e4d6c6c07d6d67f4ee2af410de1ca78ec4b0eb26" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.136637 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn72x\" (UniqueName: \"kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.136700 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.136843 4979 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.136906 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:42.636887279 +0000 UTC m=+1598.598134312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : configmap "openstack-scripts" not found Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.141369 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cn72x for pod openstack/keystone-bb3f-account-create-update-f78xh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.141473 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:42.641445981 +0000 UTC m=+1598.602693014 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cn72x" (UniqueName: "kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.143705 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ae89cf4-f9f4-456b-947f-be87514b79ff","Type":"ContainerDied","Data":"d676db9e0437471efdaf50743e5441a714dacf3c96e2d551ea726b731e77a900"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.143784 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d676db9e0437471efdaf50743e5441a714dacf3c96e2d551ea726b731e77a900" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.143728 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.160960 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-504c-account-create-update-wjh5g" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.161138 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0121-account-create-update-cjfbd" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.161217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44df4390-d39d-42b7-904c-99d3e9680768","Type":"ContainerDied","Data":"310f6153774de835ecceb3e7b4bfe47eaf94f357a8b3af4b2a3390f2be2a89ff"} Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.161274 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d511-account-create-update-gfm26" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.161960 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b6e4-account-create-update-6c4qp" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.162343 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310f6153774de835ecceb3e7b4bfe47eaf94f357a8b3af4b2a3390f2be2a89ff" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.173118 4979 scope.go:117] "RemoveContainer" containerID="0a36922f832fee9028934a3bf94046644f1757e67d16e088681eff93cf07c0b1" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.173436 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.173452 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-18a2-account-create-update-tgfqm"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.183928 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-18a2-account-create-update-tgfqm"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.193686 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.202378 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.215233 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7fddd57b54-bjm4k"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.227585 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.227704 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-7fddd57b54-bjm4k"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.232788 4979 scope.go:117] "RemoveContainer" containerID="e770b91e07387f2cc7714abec872d9f336f062619e82ff5a821a881fae71b395" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238087 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c466a98-f01c-49ab-841a-8f35c54e71f3-logs\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238157 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238247 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-public-tls-certs\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238330 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data-custom\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238356 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-internal-tls-certs\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238570 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw6m2\" (UniqueName: \"kubernetes.io/projected/5c466a98-f01c-49ab-841a-8f35c54e71f3-kube-api-access-fw6m2\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: 
\"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.238636 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-combined-ca-bundle\") pod \"5c466a98-f01c-49ab-841a-8f35c54e71f3\" (UID: \"5c466a98-f01c-49ab-841a-8f35c54e71f3\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.239790 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c466a98-f01c-49ab-841a-8f35c54e71f3-logs" (OuterVolumeSpecName: "logs") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.245401 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c466a98-f01c-49ab-841a-8f35c54e71f3-kube-api-access-fw6m2" (OuterVolumeSpecName: "kube-api-access-fw6m2") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "kube-api-access-fw6m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.248297 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.249500 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.254807 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="galera" containerID="cri-o://62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962" gracePeriod=30 Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.290553 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.292500 4979 scope.go:117] "RemoveContainer" containerID="23ad8510aef46a03d09be8ae445862a192f01f665f34f44f707f525fa87b806a" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.319603 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.328515 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.329540 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345451 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zhrl\" (UniqueName: \"kubernetes.io/projected/c808d1a7-071b-4af7-b86d-adbc0e98803b-kube-api-access-4zhrl\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345505 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-logs\") pod \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345568 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data" (OuterVolumeSpecName: "config-data") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345705 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-combined-ca-bundle\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345800 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae89cf4-f9f4-456b-947f-be87514b79ff-logs\") pod \"3ae89cf4-f9f4-456b-947f-be87514b79ff\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345866 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-config-data\") pod \"3ae89cf4-f9f4-456b-947f-be87514b79ff\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345933 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-combined-ca-bundle\") pod \"3ae89cf4-f9f4-456b-947f-be87514b79ff\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.345971 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data\") pod \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346019 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxxns\" (UniqueName: \"kubernetes.io/projected/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-kube-api-access-qxxns\") pod \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346083 4979 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-public-tls-certs\") pod \"3ae89cf4-f9f4-456b-947f-be87514b79ff\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346167 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-public-tls-certs\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346225 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-combined-ca-bundle\") pod \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346257 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c808d1a7-071b-4af7-b86d-adbc0e98803b-logs\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346379 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-config-data\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346460 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txcpg\" (UniqueName: \"kubernetes.io/projected/3ae89cf4-f9f4-456b-947f-be87514b79ff-kube-api-access-txcpg\") pod \"3ae89cf4-f9f4-456b-947f-be87514b79ff\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346499 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data-custom\") pod \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\" (UID: \"cdfe8d13-8537-4477-ae9e-5c9aa6e104de\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346539 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-scripts\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346563 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-internal-tls-certs\") pod \"c808d1a7-071b-4af7-b86d-adbc0e98803b\" (UID: \"c808d1a7-071b-4af7-b86d-adbc0e98803b\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.346588 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-internal-tls-certs\") pod \"3ae89cf4-f9f4-456b-947f-be87514b79ff\" (UID: \"3ae89cf4-f9f4-456b-947f-be87514b79ff\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.347120 4979 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-logs" (OuterVolumeSpecName: "logs") pod "cdfe8d13-8537-4477-ae9e-5c9aa6e104de" (UID: "cdfe8d13-8537-4477-ae9e-5c9aa6e104de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.347614 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.347910 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c466a98-f01c-49ab-841a-8f35c54e71f3-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.347992 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.348095 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.348191 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.348262 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.348332 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw6m2\" (UniqueName: \"kubernetes.io/projected/5c466a98-f01c-49ab-841a-8f35c54e71f3-kube-api-access-fw6m2\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.350566 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5c466a98-f01c-49ab-841a-8f35c54e71f3" (UID: "5c466a98-f01c-49ab-841a-8f35c54e71f3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.355348 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cn72x operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-bb3f-account-create-update-f78xh" podUID="ba12ac60-82de-4c7b-9411-4f36b0aedf3b" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.357842 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ae89cf4-f9f4-456b-947f-be87514b79ff-logs" (OuterVolumeSpecName: "logs") pod "3ae89cf4-f9f4-456b-947f-be87514b79ff" (UID: "3ae89cf4-f9f4-456b-947f-be87514b79ff"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.358155 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c808d1a7-071b-4af7-b86d-adbc0e98803b-kube-api-access-4zhrl" (OuterVolumeSpecName: "kube-api-access-4zhrl") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "kube-api-access-4zhrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.358999 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c808d1a7-071b-4af7-b86d-adbc0e98803b-logs" (OuterVolumeSpecName: "logs") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.359156 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cdfe8d13-8537-4477-ae9e-5c9aa6e104de" (UID: "cdfe8d13-8537-4477-ae9e-5c9aa6e104de"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.360901 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.380448 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-kube-api-access-qxxns" (OuterVolumeSpecName: "kube-api-access-qxxns") pod "cdfe8d13-8537-4477-ae9e-5c9aa6e104de" (UID: "cdfe8d13-8537-4477-ae9e-5c9aa6e104de"). InnerVolumeSpecName "kube-api-access-qxxns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.387143 4979 scope.go:117] "RemoveContainer" containerID="6a656e436b19b339c0c277b8bbce77e23d12a120c342e1158752b1f56079e1d7" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.388005 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ae89cf4-f9f4-456b-947f-be87514b79ff-kube-api-access-txcpg" (OuterVolumeSpecName: "kube-api-access-txcpg") pod "3ae89cf4-f9f4-456b-947f-be87514b79ff" (UID: "3ae89cf4-f9f4-456b-947f-be87514b79ff"). InnerVolumeSpecName "kube-api-access-txcpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.415184 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-scripts" (OuterVolumeSpecName: "scripts") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.416667 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.417150 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460018 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0121-account-create-update-cjfbd"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460402 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-nova-metadata-tls-certs\") pod \"44df4390-d39d-42b7-904c-99d3e9680768\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460450 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-combined-ca-bundle\") pod \"44df4390-d39d-42b7-904c-99d3e9680768\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460547 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44df4390-d39d-42b7-904c-99d3e9680768-logs\") pod \"44df4390-d39d-42b7-904c-99d3e9680768\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460573 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8x65\" (UniqueName: \"kubernetes.io/projected/44df4390-d39d-42b7-904c-99d3e9680768-kube-api-access-v8x65\") pod \"44df4390-d39d-42b7-904c-99d3e9680768\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460600 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-combined-ca-bundle\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460652 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460682 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-config-data\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460727 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-scripts\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460749 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-public-tls-certs\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460789 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-internal-tls-certs\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460834 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-logs\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460861 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9nbw\" (UniqueName: \"kubernetes.io/projected/b0baa205-eff4-4cad-a27f-db3599bba092-kube-api-access-r9nbw\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460895 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data-custom\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.460940 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-public-tls-certs\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461044 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54d2662c-bd60-4a08-accd-e30f0a51518c-logs\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461065 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54d2662c-bd60-4a08-accd-e30f0a51518c-etc-machine-id\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461104 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-httpd-run\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461139 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461204 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv5tg\" (UniqueName: \"kubernetes.io/projected/54d2662c-bd60-4a08-accd-e30f0a51518c-kube-api-access-bv5tg\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461221 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-scripts\") pod \"b0baa205-eff4-4cad-a27f-db3599bba092\" (UID: \"b0baa205-eff4-4cad-a27f-db3599bba092\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461258 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-combined-ca-bundle\") pod \"54d2662c-bd60-4a08-accd-e30f0a51518c\" (UID: \"54d2662c-bd60-4a08-accd-e30f0a51518c\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461305 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-config-data\") pod \"44df4390-d39d-42b7-904c-99d3e9680768\" (UID: \"44df4390-d39d-42b7-904c-99d3e9680768\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461844 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461863 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461873 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zhrl\" (UniqueName: \"kubernetes.io/projected/c808d1a7-071b-4af7-b86d-adbc0e98803b-kube-api-access-4zhrl\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461886 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c466a98-f01c-49ab-841a-8f35c54e71f3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461896 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ae89cf4-f9f4-456b-947f-be87514b79ff-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461907 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxxns\" (UniqueName: \"kubernetes.io/projected/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-kube-api-access-qxxns\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461917 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c808d1a7-071b-4af7-b86d-adbc0e98803b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.461929 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txcpg\" (UniqueName: \"kubernetes.io/projected/3ae89cf4-f9f4-456b-947f-be87514b79ff-kube-api-access-txcpg\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.462271 4979 scope.go:117] "RemoveContainer" containerID="b5cd75c070f4563e5400007f2a3b5fc99f54b10f69882167ae699e694edff112" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.462589 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54d2662c-bd60-4a08-accd-e30f0a51518c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.464583 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44df4390-d39d-42b7-904c-99d3e9680768-logs" (OuterVolumeSpecName: "logs") pod "44df4390-d39d-42b7-904c-99d3e9680768" (UID: "44df4390-d39d-42b7-904c-99d3e9680768"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.476246 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.477464 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-logs" (OuterVolumeSpecName: "logs") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.479893 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-config-data" (OuterVolumeSpecName: "config-data") pod "3ae89cf4-f9f4-456b-947f-be87514b79ff" (UID: "3ae89cf4-f9f4-456b-947f-be87514b79ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.481715 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-0121-account-create-update-cjfbd"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.486161 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d2662c-bd60-4a08-accd-e30f0a51518c-logs" (OuterVolumeSpecName: "logs") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.522119 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0baa205-eff4-4cad-a27f-db3599bba092-kube-api-access-r9nbw" (OuterVolumeSpecName: "kube-api-access-r9nbw") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "kube-api-access-r9nbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.552162 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-504c-account-create-update-wjh5g"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.563160 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-combined-ca-bundle\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.564249 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-internal-tls-certs\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.564570 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-config-data\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.564658 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df5k8\" (UniqueName: \"kubernetes.io/projected/aec2e945-509e-4cbb-9988-9f6cc840cd62-kube-api-access-df5k8\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.564730 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-logs\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.564842 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-config\") pod \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.564961 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp9nn\" (UniqueName: \"kubernetes.io/projected/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-api-access-vp9nn\") pod \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.565127 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.565211 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-httpd-run\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.565310 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts\") pod \"aec2e945-509e-4cbb-9988-9f6cc840cd62\" (UID: \"aec2e945-509e-4cbb-9988-9f6cc840cd62\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.565444 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-combined-ca-bundle\") pod \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.565676 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-certs\") pod \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\" (UID: \"fe5eba1b-535d-4519-97c5-5e8b8f003d96\") " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.566698 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-logs" (OuterVolumeSpecName: "logs") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567778 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44df4390-d39d-42b7-904c-99d3e9680768-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567802 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567818 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567828 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9nbw\" (UniqueName: \"kubernetes.io/projected/b0baa205-eff4-4cad-a27f-db3599bba092-kube-api-access-r9nbw\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567838 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567848 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54d2662c-bd60-4a08-accd-e30f0a51518c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567862 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54d2662c-bd60-4a08-accd-e30f0a51518c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.567871 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0baa205-eff4-4cad-a27f-db3599bba092-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.568080 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-504c-account-create-update-wjh5g"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.570118 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.577226 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec2e945-509e-4cbb-9988-9f6cc840cd62-kube-api-access-df5k8" (OuterVolumeSpecName: "kube-api-access-df5k8") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "kube-api-access-df5k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.580106 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.580172 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.584999 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-scripts" (OuterVolumeSpecName: "scripts") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.595306 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-scripts" (OuterVolumeSpecName: "scripts") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.598305 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.605312 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44df4390-d39d-42b7-904c-99d3e9680768-kube-api-access-v8x65" (OuterVolumeSpecName: "kube-api-access-v8x65") pod "44df4390-d39d-42b7-904c-99d3e9680768" (UID: "44df4390-d39d-42b7-904c-99d3e9680768"). InnerVolumeSpecName "kube-api-access-v8x65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.643769 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d2662c-bd60-4a08-accd-e30f0a51518c-kube-api-access-bv5tg" (OuterVolumeSpecName: "kube-api-access-bv5tg") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "kube-api-access-bv5tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.748401 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts" (OuterVolumeSpecName: "scripts") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.748328 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-api-access-vp9nn" (OuterVolumeSpecName: "kube-api-access-vp9nn") pod "fe5eba1b-535d-4519-97c5-5e8b8f003d96" (UID: "fe5eba1b-535d-4519-97c5-5e8b8f003d96"). InnerVolumeSpecName "kube-api-access-vp9nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760337 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn72x\" (UniqueName: \"kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760463 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760797 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df5k8\" (UniqueName: \"kubernetes.io/projected/aec2e945-509e-4cbb-9988-9f6cc840cd62-kube-api-access-df5k8\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760815 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp9nn\" (UniqueName: \"kubernetes.io/projected/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-api-access-vp9nn\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760851 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760873 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760888 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aec2e945-509e-4cbb-9988-9f6cc840cd62-httpd-run\") on node 
\"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760908 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760921 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv5tg\" (UniqueName: \"kubernetes.io/projected/54d2662c-bd60-4a08-accd-e30f0a51518c-kube-api-access-bv5tg\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760933 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760958 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8x65\" (UniqueName: \"kubernetes.io/projected/44df4390-d39d-42b7-904c-99d3e9680768-kube-api-access-v8x65\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760971 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.760984 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.764655 4979 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.764805 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:43.764779064 +0000 UTC m=+1599.726026107 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : configmap "openstack-scripts" not found Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.765943 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.795211 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data" (OuterVolumeSpecName: "config-data") pod "cdfe8d13-8537-4477-ae9e-5c9aa6e104de" (UID: "cdfe8d13-8537-4477-ae9e-5c9aa6e104de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.795541 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cn72x for pod openstack/keystone-bb3f-account-create-update-f78xh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:42 crc kubenswrapper[4979]: E0130 22:06:42.795609 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:43.795581863 +0000 UTC m=+1599.756828896 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cn72x" (UniqueName: "kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.853391 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.863384 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.896395 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d511-account-create-update-gfm26"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.914303 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ae89cf4-f9f4-456b-947f-be87514b79ff" (UID: "3ae89cf4-f9f4-456b-947f-be87514b79ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.921882 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d511-account-create-update-gfm26"] Jan 30 22:06:42 crc kubenswrapper[4979]: I0130 22:06:42.968242 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.001225 4979 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 22:06:43 crc kubenswrapper[4979]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 22:06:43 crc kubenswrapper[4979]: Jan 30 22:06:43 crc kubenswrapper[4979]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 22:06:43 crc kubenswrapper[4979]: Jan 30 22:06:43 crc kubenswrapper[4979]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 22:06:43 crc kubenswrapper[4979]: Jan 30 22:06:43 crc kubenswrapper[4979]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 22:06:43 crc kubenswrapper[4979]: Jan 30 22:06:43 crc kubenswrapper[4979]: if [ -n "" ]; then Jan 30 22:06:43 crc kubenswrapper[4979]: GRANT_DATABASE="" Jan 30 22:06:43 crc kubenswrapper[4979]: else Jan 30 22:06:43 crc kubenswrapper[4979]: GRANT_DATABASE="*" Jan 30 22:06:43 crc kubenswrapper[4979]: fi Jan 30 22:06:43 crc kubenswrapper[4979]: Jan 30 22:06:43 crc kubenswrapper[4979]: # going for maximum compatibility here: Jan 30 22:06:43 crc kubenswrapper[4979]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 22:06:43 crc kubenswrapper[4979]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 22:06:43 crc kubenswrapper[4979]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 22:06:43 crc kubenswrapper[4979]: # support updates Jan 30 22:06:43 crc kubenswrapper[4979]: Jan 30 22:06:43 crc kubenswrapper[4979]: $MYSQL_CMD < logger="UnhandledError" Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.002994 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-czjz7" podUID="103e7f4c-fbf4-471c-9e8f-dbb281d59de1" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.006055 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b6e4-account-create-update-6c4qp"] Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.017200 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b6e4-account-create-update-6c4qp"] Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.053725 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-czjz7"] Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.110830 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3ae89cf4-f9f4-456b-947f-be87514b79ff" (UID: "3ae89cf4-f9f4-456b-947f-be87514b79ff"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.116739 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdfe8d13-8537-4477-ae9e-5c9aa6e104de" (UID: "cdfe8d13-8537-4477-ae9e-5c9aa6e104de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.128071 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.137060 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.148573 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-config-data" (OuterVolumeSpecName: "config-data") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.161102 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "fe5eba1b-535d-4519-97c5-5e8b8f003d96" (UID: "fe5eba1b-535d-4519-97c5-5e8b8f003d96"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.177310 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "44df4390-d39d-42b7-904c-99d3e9680768" (UID: "44df4390-d39d-42b7-904c-99d3e9680768"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.181673 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190756 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190804 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190821 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190842 4979 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190858 4979 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190873 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190890 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.190904 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe8d13-8537-4477-ae9e-5c9aa6e104de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.192355 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-config-data" (OuterVolumeSpecName: "config-data") pod "44df4390-d39d-42b7-904c-99d3e9680768" (UID: "44df4390-d39d-42b7-904c-99d3e9680768"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.197896 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe5eba1b-535d-4519-97c5-5e8b8f003d96" (UID: "fe5eba1b-535d-4519-97c5-5e8b8f003d96"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.203186 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21dfd874-e50d-4e61-a634-9f47ee92ff4f" path="/var/lib/kubelet/pods/21dfd874-e50d-4e61-a634-9f47ee92ff4f/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.204536 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae1b557-b27a-4331-8c91-bb1934e91fce" path="/var/lib/kubelet/pods/2ae1b557-b27a-4331-8c91-bb1934e91fce/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.209857 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4320dd9b-0e3c-474b-bb1a-e00a72ae2938" path="/var/lib/kubelet/pods/4320dd9b-0e3c-474b-bb1a-e00a72ae2938/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.212537 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d7f5965-9d27-4649-bb8f-9e99a57c0362" path="/var/lib/kubelet/pods/5d7f5965-9d27-4649-bb8f-9e99a57c0362/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.213270 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81fec9c6-beaa-4731-b527-51284f88fb92" path="/var/lib/kubelet/pods/81fec9c6-beaa-4731-b527-51284f88fb92/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.216512 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8573fb5d-0536-4182-95b7-f8d0a16ce994" path="/var/lib/kubelet/pods/8573fb5d-0536-4182-95b7-f8d0a16ce994/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.218462 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94177def-b41a-4af1-bcce-a0673da9f81c" path="/var/lib/kubelet/pods/94177def-b41a-4af1-bcce-a0673da9f81c/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.219523 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9686aad4-f2a7-4878-ae8b-f6142e93703a" path="/var/lib/kubelet/pods/9686aad4-f2a7-4878-ae8b-f6142e93703a/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.220622 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0f67cef-fc43-42c0-967e-d51d1730b419" path="/var/lib/kubelet/pods/b0f67cef-fc43-42c0-967e-d51d1730b419/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.221239 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" path="/var/lib/kubelet/pods/b4e29508-bcd2-4f07-807c-dde529c4fa24/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.222707 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04339fa-9eb7-4671-895b-ef768888add0" path="/var/lib/kubelet/pods/c04339fa-9eb7-4671-895b-ef768888add0/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.223760 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4fc1eef-47e7-4fdd-9642-da7ce95056e8" path="/var/lib/kubelet/pods/d4fc1eef-47e7-4fdd-9642-da7ce95056e8/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.224487 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7" path="/var/lib/kubelet/pods/e1f5c9b2-c0b5-4327-8e50-3c6e4cb153a7/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.225020 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f69eed38-4641-4703-8a87-93aedebfbff1" path="/var/lib/kubelet/pods/f69eed38-4641-4703-8a87-93aedebfbff1/volumes" 
Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.228331 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac7007d-8147-477c-a42e-2463290030ff" path="/var/lib/kubelet/pods/fac7007d-8147-477c-a42e-2463290030ff/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.229555 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe035ddd-73a5-43fd-8b1d-343447e1f850" path="/var/lib/kubelet/pods/fe035ddd-73a5-43fd-8b1d-343447e1f850/volumes" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.231196 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.232500 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.232832 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-65c8fcd6dc-l7v2f" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.234378 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.234478 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.234729 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.234782 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.234998 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5574d874bd-cg256" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.235317 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.235729 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cd6984846-6pk8x" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.243852 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44df4390-d39d-42b7-904c-99d3e9680768" (UID: "44df4390-d39d-42b7-904c-99d3e9680768"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.266063 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-config-data" (OuterVolumeSpecName: "config-data") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.289175 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "fe5eba1b-535d-4519-97c5-5e8b8f003d96" (UID: "fe5eba1b-535d-4519-97c5-5e8b8f003d96"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.292814 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.292849 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.292860 4979 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.292874 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5eba1b-535d-4519-97c5-5e8b8f003d96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.292885 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44df4390-d39d-42b7-904c-99d3e9680768-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.293241 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.301574 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3ae89cf4-f9f4-456b-947f-be87514b79ff" (UID: "3ae89cf4-f9f4-456b-947f-be87514b79ff"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.332522 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.374787 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.394924 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ae89cf4-f9f4-456b-947f-be87514b79ff-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.395015 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.395041 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.395051 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.405043 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.430260 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.442217 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data" (OuterVolumeSpecName: "config-data") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.446963 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54d2662c-bd60-4a08-accd-e30f0a51518c" (UID: "54d2662c-bd60-4a08-accd-e30f0a51518c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.447363 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.451749 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aec2e945-509e-4cbb-9988-9f6cc840cd62" (UID: "aec2e945-509e-4cbb-9988-9f6cc840cd62"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.499146 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.499233 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.499248 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.499261 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.499273 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec2e945-509e-4cbb-9988-9f6cc840cd62-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.499309 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d2662c-bd60-4a08-accd-e30f0a51518c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.503401 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c808d1a7-071b-4af7-b86d-adbc0e98803b" (UID: "c808d1a7-071b-4af7-b86d-adbc0e98803b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.544258 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-config-data" (OuterVolumeSpecName: "config-data") pod "b0baa205-eff4-4cad-a27f-db3599bba092" (UID: "b0baa205-eff4-4cad-a27f-db3599bba092"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.604006 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0baa205-eff4-4cad-a27f-db3599bba092-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.604531 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c808d1a7-071b-4af7-b86d-adbc0e98803b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.714166 4979 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 30 22:06:43 crc kubenswrapper[4979]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-30T22:06:36Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 22:06:43 crc kubenswrapper[4979]: /etc/init.d/functions: line 589: 477 Alarm clock "$@" Jan 30 22:06:43 crc kubenswrapper[4979]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-kxk8g" message=< Jan 30 22:06:43 crc kubenswrapper[4979]: Exiting ovn-controller (1) [FAILED] Jan 30 22:06:43 crc kubenswrapper[4979]: Killing ovn-controller (1) [ OK ] Jan 30 22:06:43 crc kubenswrapper[4979]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 30 22:06:43 crc kubenswrapper[4979]: 2026-01-30T22:06:36Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 22:06:43 crc kubenswrapper[4979]: /etc/init.d/functions: line 589: 477 Alarm clock "$@" Jan 30 22:06:43 crc kubenswrapper[4979]: > Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.714259 4979 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 30 22:06:43 crc kubenswrapper[4979]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-30T22:06:36Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 22:06:43 crc kubenswrapper[4979]: /etc/init.d/functions: line 589: 477 Alarm clock "$@" Jan 30 22:06:43 crc kubenswrapper[4979]: > pod="openstack/ovn-controller-kxk8g" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" containerID="cri-o://2f99585a0b5724b1ae341c2bb5598dd9878e0e62705a08aa07e6569ea6c20dc9" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.714318 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-kxk8g" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" containerID="cri-o://2f99585a0b5724b1ae341c2bb5598dd9878e0e62705a08aa07e6569ea6c20dc9" gracePeriod=21 Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.824699 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-czjz7" event={"ID":"103e7f4c-fbf4-471c-9e8f-dbb281d59de1","Type":"ContainerStarted","Data":"335f4b094e47edce7c0b5be42fdbe6f236f4c3629392ba85436681dc5052e8e7"} Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.824743 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"981f1fee-4d2a-4d80-bf38-80557b6c5033","Type":"ContainerDied","Data":"b0ebf6137f8f3321300579002f1760a8ba9a97e5b03ab3c25ec19ac9cb4798ff"} Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.824762 4979 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b0ebf6137f8f3321300579002f1760a8ba9a97e5b03ab3c25ec19ac9cb4798ff" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.825916 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn72x\" (UniqueName: \"kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.825968 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.826259 4979 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.826349 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:45.826328719 +0000 UTC m=+1601.787575752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : configmap "openstack-scripts" not found Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.836451 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cn72x for pod openstack/keystone-bb3f-account-create-update-f78xh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:43 crc kubenswrapper[4979]: E0130 22:06:43.836562 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:45.836534503 +0000 UTC m=+1601.797781536 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cn72x" (UniqueName: "kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.895594 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.927905 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/981f1fee-4d2a-4d80-bf38-80557b6c5033-erlang-cookie-secret\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.927982 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8t7j\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-kube-api-access-h8t7j\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928023 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-confd\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928086 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928123 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-plugins-conf\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928159 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-plugins\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928215 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-erlang-cookie\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928310 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/981f1fee-4d2a-4d80-bf38-80557b6c5033-pod-info\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928337 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-tls\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928390 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-server-conf\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: 
\"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.928487 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data\") pod \"981f1fee-4d2a-4d80-bf38-80557b6c5033\" (UID: \"981f1fee-4d2a-4d80-bf38-80557b6c5033\") " Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.934510 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.937831 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.943276 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/981f1fee-4d2a-4d80-bf38-80557b6c5033-pod-info" (OuterVolumeSpecName: "pod-info") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.944342 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.955779 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/981f1fee-4d2a-4d80-bf38-80557b6c5033-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.957241 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.959664 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-kube-api-access-h8t7j" (OuterVolumeSpecName: "kube-api-access-h8t7j") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "kube-api-access-h8t7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.961310 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:43 crc kubenswrapper[4979]: I0130 22:06:43.989713 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data" (OuterVolumeSpecName: "config-data") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.006823 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-server-conf" (OuterVolumeSpecName: "server-conf") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040350 4979 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/981f1fee-4d2a-4d80-bf38-80557b6c5033-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040397 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040408 4979 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040416 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040426 4979 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/981f1fee-4d2a-4d80-bf38-80557b6c5033-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040438 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8t7j\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-kube-api-access-h8t7j\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040473 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040484 4979 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/981f1fee-4d2a-4d80-bf38-80557b6c5033-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040496 4979 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.040506 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.128952 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "981f1fee-4d2a-4d80-bf38-80557b6c5033" (UID: "981f1fee-4d2a-4d80-bf38-80557b6c5033"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.143861 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/981f1fee-4d2a-4d80-bf38-80557b6c5033-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.198784 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.214745 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.227547 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.231689 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918 is running failed: container process not found" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.234215 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.234400 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918 is running failed: container process not found" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.235106 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918 is running failed: container process not found" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.235148 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerName="nova-cell1-conductor-conductor" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.247951 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.272701 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-65c8fcd6dc-l7v2f"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.291136 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kxk8g_5e0b30c9-4972-4476-90e8-eec8d5d44ce5/ovn-controller/0.log" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.291244 4979 generic.go:334] "Generic (PLEG): container finished" podID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerID="2f99585a0b5724b1ae341c2bb5598dd9878e0e62705a08aa07e6569ea6c20dc9" exitCode=137 Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.291417 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g" event={"ID":"5e0b30c9-4972-4476-90e8-eec8d5d44ce5","Type":"ContainerDied","Data":"2f99585a0b5724b1ae341c2bb5598dd9878e0e62705a08aa07e6569ea6c20dc9"} Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.294488 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-65c8fcd6dc-l7v2f"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.295597 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.319867 4979 generic.go:334] "Generic (PLEG): container finished" podID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerID="eb730deff98069b37c5aef76211404c3781f41d8e0443df163b818199c423131" exitCode=0 Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.320229 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e28a1e34-b97c-4090-adf8-fa3e2b766365","Type":"ContainerDied","Data":"eb730deff98069b37c5aef76211404c3781f41d8e0443df163b818199c423131"} Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.320366 4979 scope.go:117] "RemoveContainer" containerID="eb730deff98069b37c5aef76211404c3781f41d8e0443df163b818199c423131" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.325014 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.334850 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e7cc7cf6-3592-4e25-9578-27ae56d6909b/ovn-northd/0.log" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.335490 4979 generic.go:334] "Generic (PLEG): container finished" podID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerID="e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6" exitCode=139 Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.335626 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e7cc7cf6-3592-4e25-9578-27ae56d6909b","Type":"ContainerDied","Data":"e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6"} Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.348880 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.349373 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-erlang-cookie\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.349610 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e28a1e34-b97c-4090-adf8-fa3e2b766365-erlang-cookie-secret\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.349702 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.349808 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-plugins\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.349910 4979 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.350010 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-plugins-conf\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.350109 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-tls\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.350230 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e28a1e34-b97c-4090-adf8-fa3e2b766365-pod-info\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.350300 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7qvl\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-kube-api-access-n7qvl\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.350420 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-confd\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.352440 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.354787 4979 generic.go:334] "Generic (PLEG): container finished" podID="a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" containerID="11167d299d7103f588d853413dc7b7095145b87d82239c5f576cb6d82dbfce8a" exitCode=0 Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.354932 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d","Type":"ContainerDied","Data":"11167d299d7103f588d853413dc7b7095145b87d82239c5f576cb6d82dbfce8a"} Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.367394 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.373397 4979 scope.go:117] "RemoveContainer" containerID="d23312f80a962608adf95395e957ee6134bf402e8fc2a1db6e478f01ef1ed902" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.373446 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.388196 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.388594 4979 generic.go:334] "Generic (PLEG): container finished" podID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" exitCode=0 Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.388838 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.389394 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.389489 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2f627a1e-42e6-4af6-90f1-750c01bcf076","Type":"ContainerDied","Data":"d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918"} Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.390474 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e28a1e34-b97c-4090-adf8-fa3e2b766365-pod-info" (OuterVolumeSpecName: "pod-info") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.390661 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.397593 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5574d874bd-cg256"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.403367 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28a1e34-b97c-4090-adf8-fa3e2b766365-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.406266 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-kube-api-access-n7qvl" (OuterVolumeSpecName: "kube-api-access-n7qvl") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "kube-api-access-n7qvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.406368 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5574d874bd-cg256"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.407273 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data" (OuterVolumeSpecName: "config-data") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.413346 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.420015 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.431487 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.442561 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.452385 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf" (OuterVolumeSpecName: "server-conf") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.452576 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf\") pod \"e28a1e34-b97c-4090-adf8-fa3e2b766365\" (UID: \"e28a1e34-b97c-4090-adf8-fa3e2b766365\") " Jan 30 22:06:44 crc kubenswrapper[4979]: W0130 22:06:44.452675 4979 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e28a1e34-b97c-4090-adf8-fa3e2b766365/volumes/kubernetes.io~configmap/server-conf Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.452687 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf" (OuterVolumeSpecName: "server-conf") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453400 4979 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453433 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453450 4979 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e28a1e34-b97c-4090-adf8-fa3e2b766365-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453478 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453492 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453505 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453516 4979 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e28a1e34-b97c-4090-adf8-fa3e2b766365-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453530 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453540 4979 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e28a1e34-b97c-4090-adf8-fa3e2b766365-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453552 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7qvl\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-kube-api-access-n7qvl\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.453769 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cd6984846-6pk8x"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.460283 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6cd6984846-6pk8x"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.475966 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.478770 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.488993 4979 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.507794 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.513154 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-czjz7" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.517303 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e28a1e34-b97c-4090-adf8-fa3e2b766365" (UID: "e28a1e34-b97c-4090-adf8-fa3e2b766365"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.519772 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.528804 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.537870 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.540383 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.554728 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6sfh\" (UniqueName: \"kubernetes.io/projected/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-kube-api-access-k6sfh\") pod \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.554798 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-operator-scripts\") pod \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\" (UID: \"103e7f4c-fbf4-471c-9e8f-dbb281d59de1\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.555548 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e28a1e34-b97c-4090-adf8-fa3e2b766365-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.555568 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.556098 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "103e7f4c-fbf4-471c-9e8f-dbb281d59de1" (UID: "103e7f4c-fbf4-471c-9e8f-dbb281d59de1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.584519 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-kube-api-access-k6sfh" (OuterVolumeSpecName: "kube-api-access-k6sfh") pod "103e7f4c-fbf4-471c-9e8f-dbb281d59de1" (UID: "103e7f4c-fbf4-471c-9e8f-dbb281d59de1"). InnerVolumeSpecName "kube-api-access-k6sfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.665757 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mk5r\" (UniqueName: \"kubernetes.io/projected/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kube-api-access-4mk5r\") pod \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.665923 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-combined-ca-bundle\") pod \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.665972 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kolla-config\") pod \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.666011 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-memcached-tls-certs\") pod \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.666096 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-config-data\") pod \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\" (UID: \"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d\") " Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.666613 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.666630 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6sfh\" (UniqueName: \"kubernetes.io/projected/103e7f4c-fbf4-471c-9e8f-dbb281d59de1-kube-api-access-k6sfh\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.673618 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-config-data" (OuterVolumeSpecName: "config-data") pod "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" (UID: "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.681970 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" (UID: "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.687280 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kube-api-access-4mk5r" (OuterVolumeSpecName: "kube-api-access-4mk5r") pod "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" (UID: "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d"). InnerVolumeSpecName "kube-api-access-4mk5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.738418 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" (UID: "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.768900 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.768973 4979 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.768986 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.769004 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mk5r\" (UniqueName: \"kubernetes.io/projected/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-kube-api-access-4mk5r\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.796200 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" (UID: "a84d49b8-94bf-46c5-9ca4-eeac44df1d4d"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:44 crc kubenswrapper[4979]: I0130 22:06:44.871469 4979 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.912542 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962 is running failed: container process not found" containerID="62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.913164 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962 is running failed: container process not found" containerID="62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.913961 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962 is running failed: container process not found" containerID="62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 22:06:44 crc kubenswrapper[4979]: E0130 22:06:44.914084 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962 is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="galera" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.006243 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.026128 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.172:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.026300 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6d7cdf56b7-lf2dc" podUID="b4e29508-bcd2-4f07-807c-dde529c4fa24" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.172:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.030148 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kxk8g_5e0b30c9-4972-4476-90e8-eec8d5d44ce5/ovn-controller/0.log" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.030417 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxk8g" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.033909 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e7cc7cf6-3592-4e25-9578-27ae56d6909b/ovn-northd/0.log" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.034711 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078365 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-ovn-controller-tls-certs\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078511 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078579 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-combined-ca-bundle\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078619 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-combined-ca-bundle\") pod \"2f627a1e-42e6-4af6-90f1-750c01bcf076\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078651 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dn9c\" (UniqueName: \"kubernetes.io/projected/e7cc7cf6-3592-4e25-9578-27ae56d6909b-kube-api-access-5dn9c\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078788 4979 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-scripts\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078847 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run-ovn\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.078918 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-log-ovn\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079002 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079083 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-rundir\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079125 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgffv\" (UniqueName: \"kubernetes.io/projected/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-kube-api-access-mgffv\") pod \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\" (UID: \"5e0b30c9-4972-4476-90e8-eec8d5d44ce5\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079169 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079299 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-combined-ca-bundle\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079358 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-config-data\") pod \"2f627a1e-42e6-4af6-90f1-750c01bcf076\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079411 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-metrics-certs-tls-certs\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079457 4979 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vcsgr\" (UniqueName: \"kubernetes.io/projected/2f627a1e-42e6-4af6-90f1-750c01bcf076-kube-api-access-vcsgr\") pod \"2f627a1e-42e6-4af6-90f1-750c01bcf076\" (UID: \"2f627a1e-42e6-4af6-90f1-750c01bcf076\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.079525 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-northd-tls-certs\") pod \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\" (UID: \"e7cc7cf6-3592-4e25-9578-27ae56d6909b\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.080271 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.081413 4979 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.087134 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts" (OuterVolumeSpecName: "scripts") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.088157 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config" (OuterVolumeSpecName: "config") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.088442 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.106519 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run" (OuterVolumeSpecName: "var-run") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.121835 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" path="/var/lib/kubelet/pods/3ae89cf4-f9f4-456b-947f-be87514b79ff/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.122764 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44df4390-d39d-42b7-904c-99d3e9680768" path="/var/lib/kubelet/pods/44df4390-d39d-42b7-904c-99d3e9680768/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.123624 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" path="/var/lib/kubelet/pods/54d2662c-bd60-4a08-accd-e30f0a51518c/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.125760 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.130804 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-kube-api-access-mgffv" (OuterVolumeSpecName: "kube-api-access-mgffv") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "kube-api-access-mgffv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.133804 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" path="/var/lib/kubelet/pods/5c466a98-f01c-49ab-841a-8f35c54e71f3/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.134947 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" path="/var/lib/kubelet/pods/981f1fee-4d2a-4d80-bf38-80557b6c5033/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.138169 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f627a1e-42e6-4af6-90f1-750c01bcf076-kube-api-access-vcsgr" (OuterVolumeSpecName: "kube-api-access-vcsgr") pod "2f627a1e-42e6-4af6-90f1-750c01bcf076" (UID: "2f627a1e-42e6-4af6-90f1-750c01bcf076"). InnerVolumeSpecName "kube-api-access-vcsgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.138470 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-scripts" (OuterVolumeSpecName: "scripts") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.139578 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7cc7cf6-3592-4e25-9578-27ae56d6909b-kube-api-access-5dn9c" (OuterVolumeSpecName: "kube-api-access-5dn9c") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "kube-api-access-5dn9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.152245 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" path="/var/lib/kubelet/pods/aec2e945-509e-4cbb-9988-9f6cc840cd62/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.153117 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" path="/var/lib/kubelet/pods/b0baa205-eff4-4cad-a27f-db3599bba092/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.155567 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" path="/var/lib/kubelet/pods/c808d1a7-071b-4af7-b86d-adbc0e98803b/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.175687 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" path="/var/lib/kubelet/pods/cdfe8d13-8537-4477-ae9e-5c9aa6e104de/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.176416 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe5eba1b-535d-4519-97c5-5e8b8f003d96" path="/var/lib/kubelet/pods/fe5eba1b-535d-4519-97c5-5e8b8f003d96/volumes" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.199287 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-config-data" (OuterVolumeSpecName: "config-data") pod "2f627a1e-42e6-4af6-90f1-750c01bcf076" (UID: "2f627a1e-42e6-4af6-90f1-750c01bcf076"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.199969 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcsgr\" (UniqueName: \"kubernetes.io/projected/2f627a1e-42e6-4af6-90f1-750c01bcf076-kube-api-access-vcsgr\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200003 4979 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200018 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dn9c\" (UniqueName: \"kubernetes.io/projected/e7cc7cf6-3592-4e25-9578-27ae56d6909b-kube-api-access-5dn9c\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200060 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200071 4979 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200082 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200091 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-rundir\") on node 
\"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200100 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgffv\" (UniqueName: \"kubernetes.io/projected/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-kube-api-access-mgffv\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200111 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7cc7cf6-3592-4e25-9578-27ae56d6909b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.200185 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.209073 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f627a1e-42e6-4af6-90f1-750c01bcf076" (UID: "2f627a1e-42e6-4af6-90f1-750c01bcf076"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.229513 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.236949 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.262629 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.281402 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "5e0b30c9-4972-4476-90e8-eec8d5d44ce5" (UID: "5e0b30c9-4972-4476-90e8-eec8d5d44ce5"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.289226 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "e7cc7cf6-3592-4e25-9578-27ae56d6909b" (UID: "e7cc7cf6-3592-4e25-9578-27ae56d6909b"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.301653 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.301689 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b30c9-4972-4476-90e8-eec8d5d44ce5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.301702 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f627a1e-42e6-4af6-90f1-750c01bcf076-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.301717 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.301730 4979 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.301744 4979 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7cc7cf6-3592-4e25-9578-27ae56d6909b-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.421842 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.422554 4979 generic.go:334] "Generic (PLEG): container finished" podID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerID="94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a" exitCode=0 Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.422704 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ccc5789d5-9fbcz" event={"ID":"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd","Type":"ContainerDied","Data":"94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.422747 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ccc5789d5-9fbcz" event={"ID":"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd","Type":"ContainerDied","Data":"ca8441f7e30661b52f9821e4f8bade797db77f1bc59f74f658c35d0b1cade61a"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.422770 4979 scope.go:117] "RemoveContainer" containerID="cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.437944 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-czjz7" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.438289 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-czjz7" event={"ID":"103e7f4c-fbf4-471c-9e8f-dbb281d59de1","Type":"ContainerDied","Data":"335f4b094e47edce7c0b5be42fdbe6f236f4c3629392ba85436681dc5052e8e7"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.450547 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kxk8g_5e0b30c9-4972-4476-90e8-eec8d5d44ce5/ovn-controller/0.log" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.450817 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxk8g" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.451074 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxk8g" event={"ID":"5e0b30c9-4972-4476-90e8-eec8d5d44ce5","Type":"ContainerDied","Data":"96db0ca5fc664494edd55a8a9e353913c559045aaf6936b24c262a6f00efc265"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.454385 4979 generic.go:334] "Generic (PLEG): container finished" podID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerID="62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962" exitCode=0 Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.454441 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6795c6d5-6bb8-432f-b7ca-f29f33298093","Type":"ContainerDied","Data":"62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.456018 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2f627a1e-42e6-4af6-90f1-750c01bcf076","Type":"ContainerDied","Data":"a0de9700bb7fcf5a664741b82e8a5660815e5d09636e24070c5df5ee3f5b2854"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.456153 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.458474 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e7cc7cf6-3592-4e25-9578-27ae56d6909b/ovn-northd/0.log" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.459169 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e7cc7cf6-3592-4e25-9578-27ae56d6909b","Type":"ContainerDied","Data":"2bd740bd191cb301e1ace5a3abcf92c5ccb570c941fcbb8171a41eb9fdac51bb"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.459380 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.460779 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e28a1e34-b97c-4090-adf8-fa3e2b766365","Type":"ContainerDied","Data":"07a49cceb74489142f70c5e54b77a1260f27b6febbad8e29043ec778ce1e05b1"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.460938 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.469669 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a84d49b8-94bf-46c5-9ca4-eeac44df1d4d","Type":"ContainerDied","Data":"bef9626e17c775699e3abae85cd19e88917b71194c8acdd56a70c42320faed2f"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.469785 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.484010 4979 generic.go:334] "Generic (PLEG): container finished" podID="93c29874-a63d-4d35-a1a6-256d811ac6f8" containerID="dc00335b3349ed9094fcb23ca1c7d69e4482f30a798683dca97095cbf88e35db" exitCode=0 Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.484225 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.484476 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5778c484-5rg8p" event={"ID":"93c29874-a63d-4d35-a1a6-256d811ac6f8","Type":"ContainerDied","Data":"dc00335b3349ed9094fcb23ca1c7d69e4482f30a798683dca97095cbf88e35db"} Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504422 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-ovndb-tls-certs\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504561 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-combined-ca-bundle\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504597 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-internal-tls-certs\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504627 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-public-tls-certs\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504671 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504712 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb7gb\" (UniqueName: \"kubernetes.io/projected/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-kube-api-access-sb7gb\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.504821 4979 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config\") pod \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\" (UID: \"c8cc63f5-501a-4bd5-962b-a1f218fbbcdd\") " Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.542378 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-kube-api-access-sb7gb" (OuterVolumeSpecName: "kube-api-access-sb7gb") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "kube-api-access-sb7gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.549817 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.574991 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.606931 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.606954 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.606964 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb7gb\" (UniqueName: \"kubernetes.io/projected/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-kube-api-access-sb7gb\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.609916 4979 scope.go:117] "RemoveContainer" containerID="94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.610743 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.620931 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.664123 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-czjz7"] Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.667179 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-czjz7"] Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.671276 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.683863 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config" (OuterVolumeSpecName: "config") pod "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" (UID: "c8cc63f5-501a-4bd5-962b-a1f218fbbcdd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.694025 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.699524 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.709322 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.709357 4979 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.709372 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.709381 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:45 crc kubenswrapper[4979]: E0130 22:06:45.906957 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c29874_a63d_4d35_a1a6_256d811ac6f8.slice/crio-dc00335b3349ed9094fcb23ca1c7d69e4482f30a798683dca97095cbf88e35db.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.914175 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn72x\" (UniqueName: \"kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:45 crc kubenswrapper[4979]: I0130 22:06:45.914228 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts\") pod \"keystone-bb3f-account-create-update-f78xh\" (UID: \"ba12ac60-82de-4c7b-9411-4f36b0aedf3b\") " pod="openstack/keystone-bb3f-account-create-update-f78xh" Jan 30 22:06:45 crc kubenswrapper[4979]: E0130 22:06:45.914368 4979 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 22:06:45 crc kubenswrapper[4979]: E0130 22:06:45.914432 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:49.914411105 +0000 UTC m=+1605.875658148 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : configmap "openstack-scripts" not found Jan 30 22:06:45 crc kubenswrapper[4979]: E0130 22:06:45.919241 4979 projected.go:194] Error preparing data for projected volume kube-api-access-cn72x for pod openstack/keystone-bb3f-account-create-update-f78xh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:45 crc kubenswrapper[4979]: E0130 22:06:45.919293 4979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x podName:ba12ac60-82de-4c7b-9411-4f36b0aedf3b nodeName:}" failed. No retries permitted until 2026-01-30 22:06:49.919280196 +0000 UTC m=+1605.880527229 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cn72x" (UniqueName: "kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x") pod "keystone-bb3f-account-create-update-f78xh" (UID: "ba12ac60-82de-4c7b-9411-4f36b0aedf3b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.097363 4979 scope.go:117] "RemoveContainer" containerID="cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.100460 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260\": container with ID starting with cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260 not found: ID does not exist" containerID="cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.100526 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260"} err="failed to get container status \"cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260\": rpc error: code = NotFound desc = could not find container \"cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260\": container with ID starting with cef8665c26ce5460f894ec9497e9fe1d6ffc3788e1cd08a1815096a3cbf02260 not found: ID does not exist" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.100562 4979 scope.go:117] "RemoveContainer" containerID="94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.101126 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a\": container with ID starting with 94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a not found: ID does not exist" containerID="94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.101155 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a"} err="failed to get container status \"94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a\": rpc error: code = NotFound desc = could not find container \"94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a\": container with ID starting with 94f6f88a1731dcdea24a21e4f4e22f0f2622399be62ec45c6a3cbd5c84b3c56a not found: ID does not exist" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.101179 4979 scope.go:117] "RemoveContainer" containerID="2f99585a0b5724b1ae341c2bb5598dd9878e0e62705a08aa07e6569ea6c20dc9" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.101211 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.116551 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.184238 4979 scope.go:117] "RemoveContainer" containerID="d4fed2e0674072935d82f9dccc8fc8883e84d307d3935c1f1937a392d7eac918" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.208708 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.214651 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.235718 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-combined-ca-bundle\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.235813 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plqql\" (UniqueName: \"kubernetes.io/projected/93c29874-a63d-4d35-a1a6-256d811ac6f8-kube-api-access-plqql\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.235865 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-operator-scripts\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.235904 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-config-data\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.235944 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-internal-tls-certs\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236004 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-kolla-config\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236067 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-credential-keys\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236107 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-default\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236152 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236184 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqqfg\" (UniqueName: \"kubernetes.io/projected/6795c6d5-6bb8-432f-b7ca-f29f33298093-kube-api-access-rqqfg\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236221 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-generated\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236246 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-fernet-keys\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236289 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-public-tls-certs\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236331 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-scripts\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236367 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-galera-tls-certs\") pod \"6795c6d5-6bb8-432f-b7ca-f29f33298093\" (UID: \"6795c6d5-6bb8-432f-b7ca-f29f33298093\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.236399 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-combined-ca-bundle\") pod \"93c29874-a63d-4d35-a1a6-256d811ac6f8\" (UID: \"93c29874-a63d-4d35-a1a6-256d811ac6f8\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.262428 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.266589 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.272330 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.274779 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.281312 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.291973 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.294051 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.295606 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.299657 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c29874-a63d-4d35-a1a6-256d811ac6f8-kube-api-access-plqql" (OuterVolumeSpecName: "kube-api-access-plqql") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "kube-api-access-plqql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.300189 4979 scope.go:117] "RemoveContainer" containerID="e204d1dc5e7fa115beba02cf6b2cea66e47fc3000fc462300bc76d2f7b2461f6" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.303164 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-scripts" (OuterVolumeSpecName: "scripts") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.316332 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6795c6d5-6bb8-432f-b7ca-f29f33298093-kube-api-access-rqqfg" (OuterVolumeSpecName: "kube-api-access-rqqfg") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). 
InnerVolumeSpecName "kube-api-access-rqqfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.322448 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.324170 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.326978 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxk8g"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343748 4979 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343818 4979 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343834 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343848 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqqfg\" (UniqueName: \"kubernetes.io/projected/6795c6d5-6bb8-432f-b7ca-f29f33298093-kube-api-access-rqqfg\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343890 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6795c6d5-6bb8-432f-b7ca-f29f33298093-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343905 4979 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343918 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343930 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343941 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plqql\" (UniqueName: \"kubernetes.io/projected/93c29874-a63d-4d35-a1a6-256d811ac6f8-kube-api-access-plqql\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.343980 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6795c6d5-6bb8-432f-b7ca-f29f33298093-operator-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.349999 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.358762 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kxk8g"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.361783 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.367905 4979 scope.go:117] "RemoveContainer" containerID="80763810cb3d21dbcce7752b095be501d4710e63b0bd5bbd6940f8072de72cd1" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.379106 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bb3f-account-create-update-f78xh"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.387873 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bb3f-account-create-update-f78xh"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.393234 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-config-data" (OuterVolumeSpecName: "config-data") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.394672 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.399144 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.404673 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "93c29874-a63d-4d35-a1a6-256d811ac6f8" (UID: "93c29874-a63d-4d35-a1a6-256d811ac6f8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.409660 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "6795c6d5-6bb8-432f-b7ca-f29f33298093" (UID: "6795c6d5-6bb8-432f-b7ca-f29f33298093"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.410608 4979 scope.go:117] "RemoveContainer" containerID="11167d299d7103f588d853413dc7b7095145b87d82239c5f576cb6d82dbfce8a" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.415509 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445056 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-log-httpd\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445122 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-run-httpd\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445180 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-ceilometer-tls-certs\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445253 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-config-data\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445279 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xc79\" (UniqueName: \"kubernetes.io/projected/3b34adef-df84-42dd-a052-5e543c4182b5-kube-api-access-7xc79\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445327 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-scripts\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445460 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-sg-core-conf-yaml\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445502 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-combined-ca-bundle\") pod \"3b34adef-df84-42dd-a052-5e543c4182b5\" (UID: \"3b34adef-df84-42dd-a052-5e543c4182b5\") " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445869 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn72x\" (UniqueName: \"kubernetes.io/projected/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-kube-api-access-cn72x\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445898 4979 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445910 4979 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445920 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba12ac60-82de-4c7b-9411-4f36b0aedf3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445931 4979 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445942 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6795c6d5-6bb8-432f-b7ca-f29f33298093-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445951 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.445960 4979 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93c29874-a63d-4d35-a1a6-256d811ac6f8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.446496 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.446810 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.450532 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-scripts" (OuterVolumeSpecName: "scripts") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.450675 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b34adef-df84-42dd-a052-5e543c4182b5-kube-api-access-7xc79" (OuterVolumeSpecName: "kube-api-access-7xc79") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "kube-api-access-7xc79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.468151 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.480005 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.499812 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.513918 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ccc5789d5-9fbcz" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.516918 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.521415 4979 generic.go:334] "Generic (PLEG): container finished" podID="3b34adef-df84-42dd-a052-5e543c4182b5" containerID="b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c" exitCode=0 Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.521506 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.521519 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerDied","Data":"b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c"} Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.521700 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b34adef-df84-42dd-a052-5e543c4182b5","Type":"ContainerDied","Data":"1544871f33799c3038bca6a1237524bb73b783f1c5406b279be53a7e8d66904e"} Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.521749 4979 scope.go:117] "RemoveContainer" containerID="93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.525335 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f5778c484-5rg8p" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.525345 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5778c484-5rg8p" event={"ID":"93c29874-a63d-4d35-a1a6-256d811ac6f8","Type":"ContainerDied","Data":"3e9edd35208d792f51f192e25b79d4b0f4b1e176ef66384b0abd50fdfae09711"} Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.532435 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6795c6d5-6bb8-432f-b7ca-f29f33298093","Type":"ContainerDied","Data":"78ea57414491f2323050c139427e26db676dbcbe77ee157ba12f1a06c2d26416"} Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.532567 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555787 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555825 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555836 4979 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555847 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555859 4979 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555869 4979 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b34adef-df84-42dd-a052-5e543c4182b5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555878 4979 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.555888 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xc79\" (UniqueName: \"kubernetes.io/projected/3b34adef-df84-42dd-a052-5e543c4182b5-kube-api-access-7xc79\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.561020 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ccc5789d5-9fbcz"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.565316 4979 scope.go:117] "RemoveContainer" containerID="fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.567267 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ccc5789d5-9fbcz"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.570273 4979 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-config-data" (OuterVolumeSpecName: "config-data") pod "3b34adef-df84-42dd-a052-5e543c4182b5" (UID: "3b34adef-df84-42dd-a052-5e543c4182b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.595542 4979 scope.go:117] "RemoveContainer" containerID="b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.600831 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f5778c484-5rg8p"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.618699 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f5778c484-5rg8p"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.627318 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.628826 4979 scope.go:117] "RemoveContainer" containerID="5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.638684 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.659797 4979 scope.go:117] "RemoveContainer" containerID="93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.660522 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed\": container with ID starting with 93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed not found: ID does not exist" containerID="93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.660570 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed"} err="failed to get container status \"93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed\": rpc error: code = NotFound desc = could not find container \"93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed\": container with ID starting with 93a6801b37d4506db0bc8ba3a5e41e9d556324e8efbdfa95ac206db99d7460ed not found: ID does not exist" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.660606 4979 scope.go:117] "RemoveContainer" containerID="fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.660896 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511\": container with ID starting with fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511 not found: ID does not exist" containerID="fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.660930 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511"} err="failed to get container status 
\"fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511\": rpc error: code = NotFound desc = could not find container \"fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511\": container with ID starting with fd1df058e51bc040cef062b0361a1864db8920aada98bd5528292eee4f0d4511 not found: ID does not exist" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.660942 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b34adef-df84-42dd-a052-5e543c4182b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.660951 4979 scope.go:117] "RemoveContainer" containerID="b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.662660 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c\": container with ID starting with b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c not found: ID does not exist" containerID="b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.662710 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c"} err="failed to get container status \"b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c\": rpc error: code = NotFound desc = could not find container \"b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c\": container with ID starting with b33d7e6e4b1a061bb81d49c97677efb14eba8862a750551a1acf21df1aec070c not found: ID does not exist" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.662753 4979 scope.go:117] "RemoveContainer" containerID="5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.663258 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267\": container with ID starting with 5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267 not found: ID does not exist" containerID="5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.663338 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267"} err="failed to get container status \"5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267\": rpc error: code = NotFound desc = could not find container \"5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267\": container with ID starting with 5ca4433ebc469d7ca82e6fa0f5cc2b5902dbe77bd972d7c5ad894bdc2ec01267 not found: ID does not exist" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.663390 4979 scope.go:117] "RemoveContainer" containerID="dc00335b3349ed9094fcb23ca1c7d69e4482f30a798683dca97095cbf88e35db" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.694489 4979 scope.go:117] "RemoveContainer" containerID="62ff7b85370354278a808f7ae11223c1875f6cc6f9e96f521ca16c69655fd962" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.699995 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.700012 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.700692 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.701069 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.701113 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.702698 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.716170 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:46 crc kubenswrapper[4979]: E0130 22:06:46.716249 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.745384 4979 scope.go:117] "RemoveContainer" containerID="c95e9571ab3d28e43a0c69cdf9503d7a855b5db4e2dc8986089e4c89a9a844d2" Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 
22:06:46.862288 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:06:46 crc kubenswrapper[4979]: I0130 22:06:46.874991 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.080199 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="103e7f4c-fbf4-471c-9e8f-dbb281d59de1" path="/var/lib/kubelet/pods/103e7f4c-fbf4-471c-9e8f-dbb281d59de1/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.080647 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" path="/var/lib/kubelet/pods/2f627a1e-42e6-4af6-90f1-750c01bcf076/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.081318 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" path="/var/lib/kubelet/pods/3b34adef-df84-42dd-a052-5e543c4182b5/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.084164 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" path="/var/lib/kubelet/pods/5e0b30c9-4972-4476-90e8-eec8d5d44ce5/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.085797 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" path="/var/lib/kubelet/pods/6795c6d5-6bb8-432f-b7ca-f29f33298093/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.086466 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c29874-a63d-4d35-a1a6-256d811ac6f8" path="/var/lib/kubelet/pods/93c29874-a63d-4d35-a1a6-256d811ac6f8/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.087690 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" path="/var/lib/kubelet/pods/a84d49b8-94bf-46c5-9ca4-eeac44df1d4d/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.089020 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba12ac60-82de-4c7b-9411-4f36b0aedf3b" path="/var/lib/kubelet/pods/ba12ac60-82de-4c7b-9411-4f36b0aedf3b/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.089334 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" path="/var/lib/kubelet/pods/c8cc63f5-501a-4bd5-962b-a1f218fbbcdd/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.090555 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" path="/var/lib/kubelet/pods/e28a1e34-b97c-4090-adf8-fa3e2b766365/volumes" Jan 30 22:06:47 crc kubenswrapper[4979]: I0130 22:06:47.091224 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" path="/var/lib/kubelet/pods/e7cc7cf6-3592-4e25-9578-27ae56d6909b/volumes" Jan 30 22:06:48 crc kubenswrapper[4979]: I0130 22:06:48.599350 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: i/o timeout" Jan 30 22:06:48 crc kubenswrapper[4979]: I0130 22:06:48.670077 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="rabbitmq" probeResult="failure" 
output="dial tcp 10.217.0.103:5671: i/o timeout" Jan 30 22:06:51 crc kubenswrapper[4979]: I0130 22:06:51.070358 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.070916 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.699183 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.699708 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.700243 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.701025 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.701071 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.701901 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.710634 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown 
desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:51 crc kubenswrapper[4979]: E0130 22:06:51.710736 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.698944 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.699906 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.700238 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.700349 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.700668 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.702193 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.708208 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:06:56 crc kubenswrapper[4979]: E0130 22:06:56.708296 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.699180 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.700465 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.700983 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.701090 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.701098 4979 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.702674 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.704095 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 22:07:01 crc kubenswrapper[4979]: E0130 22:07:01.704146 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-tmjt2" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:07:04 crc kubenswrapper[4979]: I0130 22:07:04.070691 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:07:04 crc kubenswrapper[4979]: E0130 22:07:04.070978 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:07:04 crc kubenswrapper[4979]: I0130 22:07:04.771287 4979 generic.go:334] "Generic (PLEG): container finished" podID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerID="453f3cdac4ea155af06a1a316c55ca43062a6082a47aacfa7561eb05a7b482b3" exitCode=137 Jan 30 22:07:04 crc kubenswrapper[4979]: I0130 22:07:04.771517 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"453f3cdac4ea155af06a1a316c55ca43062a6082a47aacfa7561eb05a7b482b3"} Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.427112 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.521515 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3258ad4a-d940-41c3-b875-afadfcc317d4-combined-ca-bundle\") pod \"3258ad4a-d940-41c3-b875-afadfcc317d4\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.522047 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") pod \"3258ad4a-d940-41c3-b875-afadfcc317d4\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.522153 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-cache\") pod \"3258ad4a-d940-41c3-b875-afadfcc317d4\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.522212 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-lock\") pod \"3258ad4a-d940-41c3-b875-afadfcc317d4\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.522245 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"3258ad4a-d940-41c3-b875-afadfcc317d4\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.522293 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28trk\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-kube-api-access-28trk\") pod \"3258ad4a-d940-41c3-b875-afadfcc317d4\" (UID: \"3258ad4a-d940-41c3-b875-afadfcc317d4\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.524191 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-cache" (OuterVolumeSpecName: "cache") pod "3258ad4a-d940-41c3-b875-afadfcc317d4" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.524887 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-lock" (OuterVolumeSpecName: "lock") pod "3258ad4a-d940-41c3-b875-afadfcc317d4" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.536893 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-kube-api-access-28trk" (OuterVolumeSpecName: "kube-api-access-28trk") pod "3258ad4a-d940-41c3-b875-afadfcc317d4" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4"). InnerVolumeSpecName "kube-api-access-28trk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.536966 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3258ad4a-d940-41c3-b875-afadfcc317d4" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.540888 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "swift") pod "3258ad4a-d940-41c3-b875-afadfcc317d4" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.624853 4979 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.624896 4979 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-cache\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.624910 4979 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3258ad4a-d940-41c3-b875-afadfcc317d4-lock\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.624950 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.624969 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28trk\" (UniqueName: \"kubernetes.io/projected/3258ad4a-d940-41c3-b875-afadfcc317d4-kube-api-access-28trk\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.643850 4979 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.648208 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tmjt2_6ed4b9c3-3a9b-4c60-a68b-046cf5288e88/ovs-vswitchd/0.log" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.649772 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.727098 4979 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.790022 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tmjt2_6ed4b9c3-3a9b-4c60-a68b-046cf5288e88/ovs-vswitchd/0.log" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.791179 4979 generic.go:334] "Generic (PLEG): container finished" podID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" exitCode=137 Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.791258 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerDied","Data":"ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb"} Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.791340 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmjt2" event={"ID":"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88","Type":"ContainerDied","Data":"af076ee56d5886e64a296e55b03b5bb0ded8de489a95899c61270dac099f1dfe"} Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.791246 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tmjt2" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.791383 4979 scope.go:117] "RemoveContainer" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.803167 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3258ad4a-d940-41c3-b875-afadfcc317d4","Type":"ContainerDied","Data":"b5f19eb16c0b9ad8d89d2db8aaef61e8a41afec6d53e30023f1498d447572ee3"} Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.803258 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.815675 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3258ad4a-d940-41c3-b875-afadfcc317d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3258ad4a-d940-41c3-b875-afadfcc317d4" (UID: "3258ad4a-d940-41c3-b875-afadfcc317d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.819413 4979 scope.go:117] "RemoveContainer" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.828923 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-log\") pod \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.828984 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-etc-ovs\") pod \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829046 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-lib\") pod \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829083 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-scripts\") pod \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829116 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" (UID: "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829147 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkgwm\" (UniqueName: \"kubernetes.io/projected/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-kube-api-access-wkgwm\") pod \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829335 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-lib" (OuterVolumeSpecName: "var-lib") pod "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" (UID: "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829351 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-run\") pod \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\" (UID: \"6ed4b9c3-3a9b-4c60-a68b-046cf5288e88\") " Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829321 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-log" (OuterVolumeSpecName: "var-log") pod "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" (UID: "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829594 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-run" (OuterVolumeSpecName: "var-run") pod "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" (UID: "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.829948 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3258ad4a-d940-41c3-b875-afadfcc317d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.830024 4979 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.830111 4979 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.830179 4979 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-lib\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.830240 4979 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.846181 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-kube-api-access-wkgwm" (OuterVolumeSpecName: "kube-api-access-wkgwm") pod "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" (UID: "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88"). InnerVolumeSpecName "kube-api-access-wkgwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.848006 4979 scope.go:117] "RemoveContainer" containerID="515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.849328 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-scripts" (OuterVolumeSpecName: "scripts") pod "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" (UID: "6ed4b9c3-3a9b-4c60-a68b-046cf5288e88"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.893973 4979 scope.go:117] "RemoveContainer" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" Jan 30 22:07:05 crc kubenswrapper[4979]: E0130 22:07:05.894547 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb\": container with ID starting with ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb not found: ID does not exist" containerID="ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.894599 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb"} err="failed to get container status \"ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb\": rpc error: code = NotFound desc = could not find container \"ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb\": container with ID starting with ec44526760a6ae35294ce84dd04dd9a7d1ad783cfa0adf061c488af585fe8bdb not found: ID does not exist" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.894629 4979 scope.go:117] "RemoveContainer" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" Jan 30 22:07:05 crc kubenswrapper[4979]: E0130 22:07:05.894947 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70\": container with ID starting with 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 not found: ID does not exist" containerID="2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.894994 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70"} err="failed to get container status \"2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70\": rpc error: code = NotFound desc = could not find container \"2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70\": container with ID starting with 2b976cf9154d634e6411fdf6805a3ee6c9c178f6bc0cdd93c5d788ad1864ff70 not found: ID does not exist" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.895016 4979 scope.go:117] "RemoveContainer" containerID="515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd" Jan 30 22:07:05 crc kubenswrapper[4979]: E0130 22:07:05.895406 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd\": container with ID starting with 515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd not found: ID does not exist" containerID="515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.895443 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd"} err="failed to get container status \"515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd\": rpc error: code = NotFound desc = could not 
find container \"515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd\": container with ID starting with 515ca13e177b76cb58d0d7b3b62ee04199dc44c9d9bd0a545022e068ac1ce7fd not found: ID does not exist" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.895460 4979 scope.go:117] "RemoveContainer" containerID="453f3cdac4ea155af06a1a316c55ca43062a6082a47aacfa7561eb05a7b482b3" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.921696 4979 scope.go:117] "RemoveContainer" containerID="91cb53bd2b951f74cd0d66aa9f24d08e3c7022176624a9c9ffd768ceb393e191" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.932392 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkgwm\" (UniqueName: \"kubernetes.io/projected/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-kube-api-access-wkgwm\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.932428 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.947542 4979 scope.go:117] "RemoveContainer" containerID="7c505ec2a0f97d2fc0eb2e5eb7103ee437e137790c70cbc45de54bec450be932" Jan 30 22:07:05 crc kubenswrapper[4979]: I0130 22:07:05.984647 4979 scope.go:117] "RemoveContainer" containerID="a13835071a1b225d3d3625f54124a3d5c5460d4ec5e078997b28933e7f7ef915" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.041824 4979 scope.go:117] "RemoveContainer" containerID="c37f40c97c11f5b8472786624973bc5ec2f629f68419fa1a402dc8d14fc3b5c3" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.096273 4979 scope.go:117] "RemoveContainer" containerID="7c50cc4f395d90633fe60bc848afc67d2797c6692e44cd9bffe328b5b54a3a56" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.128435 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-tmjt2"] Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.133122 4979 scope.go:117] "RemoveContainer" containerID="34b69c813947c1a15abad9192e8f1cfc7295fd0dfaea4369b35dee2f2f213420" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.157272 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-tmjt2"] Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.164359 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.171368 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.176445 4979 scope.go:117] "RemoveContainer" containerID="fb5eed82db60f42c13875f8180e968872868e5bef720fb14a82263b83c648551" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.197154 4979 scope.go:117] "RemoveContainer" containerID="77c91a8d273f8a0846a55b9f82be6f9553ba25a3808edc69d4b752bae0e84601" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.268293 4979 scope.go:117] "RemoveContainer" containerID="b195485b1f45e76f20aa96948fc15a1ad9a35d2662d43614574d96802f742fb3" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.354291 4979 scope.go:117] "RemoveContainer" containerID="20e0cc7660bd336e138f9bda2b90b0037324c98e23852b050c094fc3ec2b9759" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.440561 4979 scope.go:117] "RemoveContainer" containerID="1f00e517b271012fd6fe85cefca125bfb76b1e353ce30bf2e6c8a97a1b0449c2" Jan 30 22:07:06 crc 
kubenswrapper[4979]: I0130 22:07:06.468396 4979 scope.go:117] "RemoveContainer" containerID="42fc60e63d0f40be8c73517dc917e8fd8f6f546590180e722489b27ebf9825ff" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.487620 4979 scope.go:117] "RemoveContainer" containerID="1fc0f7dc5cf54f3cba376eba063ba52318571cfa76b80fb36465eab8c48ff316" Jan 30 22:07:06 crc kubenswrapper[4979]: I0130 22:07:06.505365 4979 scope.go:117] "RemoveContainer" containerID="9ebde5265edc1759790d3676946d4106e58a2899f6ca92dff07d39b2c655de8d" Jan 30 22:07:07 crc kubenswrapper[4979]: I0130 22:07:07.085813 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" path="/var/lib/kubelet/pods/3258ad4a-d940-41c3-b875-afadfcc317d4/volumes" Jan 30 22:07:07 crc kubenswrapper[4979]: I0130 22:07:07.101417 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" path="/var/lib/kubelet/pods/6ed4b9c3-3a9b-4c60-a68b-046cf5288e88/volumes" Jan 30 22:07:15 crc kubenswrapper[4979]: I0130 22:07:15.074423 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:07:15 crc kubenswrapper[4979]: E0130 22:07:15.077269 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.146976 4979 scope.go:117] "RemoveContainer" containerID="5b349812d2a4fb80dba197720305dc0e90cd12df7c5b2836dc61787bdf46e880" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.192622 4979 scope.go:117] "RemoveContainer" containerID="d2810e946d94d2fead500cfbde94a3439ae19f7224570848395a92c854c19316" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.269839 4979 scope.go:117] "RemoveContainer" containerID="a36d94588495170c1a561d3edd9860fe102e6b36ace67d58883c2b853f52dd2a" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.317103 4979 scope.go:117] "RemoveContainer" containerID="0d4dc8128d54521f9ca5effeeca0076315899d8799e67ef62bddd57c385893e0" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.344237 4979 scope.go:117] "RemoveContainer" containerID="80e1c8de2f5d2def08241e9e838d6caa9d9317d6bfc0e4390d83af93615634c1" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.396550 4979 scope.go:117] "RemoveContainer" containerID="11b12b8a1042240e01cbd94aefdd223922da5bf565812f8e936ee2b92328c29b" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.423896 4979 scope.go:117] "RemoveContainer" containerID="ed4a97cfdf0ceeba9d88157069074ba43b147110d9fc2ad4b1393945bfaa8186" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.458091 4979 scope.go:117] "RemoveContainer" containerID="c2e6fa2e1a73e8bf62b5ee3edf154e0d34b174fdf34335916ed3037f6db0258e" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.482783 4979 scope.go:117] "RemoveContainer" containerID="20c28cbb64eeb54902f8d83f5e5ce1cb0b5f0534acb2d87e4d7c5f48e86998df" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.508214 4979 scope.go:117] "RemoveContainer" containerID="92d1caa7eb5e4a30383396fbbceaf2e0ce7b7c37d00ab11c4913c35b85a605cb" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.539273 4979 
scope.go:117] "RemoveContainer" containerID="e769167bc04ee63c4a76adb3fc46279acc328e27ce92e25a4537f461bf8adf9c" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.571831 4979 scope.go:117] "RemoveContainer" containerID="32737030f36aec701cd5a18ee26db33f1920b61eff0e7b5c5143eb68b64ad2a2" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.608246 4979 scope.go:117] "RemoveContainer" containerID="34481ae8a2678ceccfab661611d1800a7d06957c7a2f8615105c54e98d7da90e" Jan 30 22:07:21 crc kubenswrapper[4979]: I0130 22:07:21.629691 4979 scope.go:117] "RemoveContainer" containerID="936faae891dc0d6463f534c26667ac6f817885146529e96b4394369309b4bf52" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.022347 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-28h5z"] Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023193 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="rabbitmq" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023218 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="rabbitmq" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023241 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023251 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023264 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023272 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023283 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="proxy-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023291 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="proxy-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023305 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023312 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023325 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-updater" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023334 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-updater" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023347 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5eba1b-535d-4519-97c5-5e8b8f003d96" containerName="kube-state-metrics" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023357 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5eba1b-535d-4519-97c5-5e8b8f003d96" 
containerName="kube-state-metrics" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023367 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023375 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023392 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="rabbitmq" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023400 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="rabbitmq" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023414 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023423 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-server" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023433 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023442 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023455 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023463 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023476 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023484 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023498 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-central-agent" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023506 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-central-agent" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023517 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-reaper" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023525 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-reaper" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023535 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="setup-container" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023542 4979 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="setup-container" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023557 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-updater" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023565 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-updater" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023577 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="sg-core" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023586 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="sg-core" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023596 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023604 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023613 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023620 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023637 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023646 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-api" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023657 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023665 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023676 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023684 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023694 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023701 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-api" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023710 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="swift-recon-cron" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023719 4979 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="swift-recon-cron" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023734 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023742 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023752 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023760 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023771 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="setup-container" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023779 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="setup-container" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023793 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023801 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023816 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023824 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-server" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023835 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" containerName="memcached" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023844 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" containerName="memcached" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023853 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="mysql-bootstrap" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023860 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="mysql-bootstrap" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023871 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023879 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023890 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="openstack-network-exporter" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023898 4979 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="openstack-network-exporter" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023908 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="ovn-northd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023916 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="ovn-northd" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023928 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023935 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023945 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-expirer" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023952 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-expirer" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023962 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="galera" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023969 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="galera" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.023979 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-notification-agent" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.023986 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-notification-agent" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024005 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-metadata" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024017 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-metadata" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024069 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024079 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024092 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024100 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024114 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024122 4979 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-server" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024135 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024146 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024164 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024180 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024196 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerName="nova-cell1-conductor-conductor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024206 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerName="nova-cell1-conductor-conductor" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024223 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server-init" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024231 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server-init" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024242 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024250 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024263 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="rsync" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024271 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="rsync" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024307 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024315 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024330 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c29874-a63d-4d35-a1a6-256d811ac6f8" containerName="keystone-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024339 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c29874-a63d-4d35-a1a6-256d811ac6f8" containerName="keystone-api" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024353 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-api" Jan 30 22:07:24 crc 
kubenswrapper[4979]: I0130 22:07:24.024360 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-api" Jan 30 22:07:24 crc kubenswrapper[4979]: E0130 22:07:24.024374 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024382 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024552 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f627a1e-42e6-4af6-90f1-750c01bcf076" containerName="nova-cell1-conductor-conductor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024570 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovsdb-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024584 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-updater" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024593 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c29874-a63d-4d35-a1a6-256d811ac6f8" containerName="keystone-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024605 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024617 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28a1e34-b97c-4090-adf8-fa3e2b766365" containerName="rabbitmq" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024629 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024641 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024651 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024662 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024672 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-reaper" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024684 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae89cf4-f9f4-456b-947f-be87514b79ff" containerName="nova-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024692 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="account-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024703 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024716 4979 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024725 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="sg-core" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024735 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024747 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0b30c9-4972-4476-90e8-eec8d5d44ce5" containerName="ovn-controller" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024757 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024770 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="openstack-network-exporter" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024780 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024788 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-expirer" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024796 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0baa205-eff4-4cad-a27f-db3599bba092" containerName="glance-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024808 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024820 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024830 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c466a98-f01c-49ab-841a-8f35c54e71f3" containerName="barbican-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024843 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5eba1b-535d-4519-97c5-5e8b8f003d96" containerName="kube-state-metrics" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024852 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="proxy-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024864 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024874 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-updater" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024883 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="44df4390-d39d-42b7-904c-99d3e9680768" containerName="nova-metadata-metadata" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024896 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-central-agent" Jan 30 
22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024908 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a84d49b8-94bf-46c5-9ca4-eeac44df1d4d" containerName="memcached" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024916 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="object-replicator" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024929 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="981f1fee-4d2a-4d80-bf38-80557b6c5033" containerName="rabbitmq" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024943 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-auditor" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024953 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="rsync" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024963 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b34adef-df84-42dd-a052-5e543c4182b5" containerName="ceilometer-notification-agent" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024972 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c808d1a7-071b-4af7-b86d-adbc0e98803b" containerName="placement-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.024983 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-httpd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025010 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec2e945-509e-4cbb-9988-9f6cc840cd62" containerName="glance-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025019 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfe8d13-8537-4477-ae9e-5c9aa6e104de" containerName="barbican-worker" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025050 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="container-server" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025061 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d2662c-bd60-4a08-accd-e30f0a51518c" containerName="cinder-api-log" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025074 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7cc7cf6-3592-4e25-9578-27ae56d6909b" containerName="ovn-northd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025087 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cc63f5-501a-4bd5-962b-a1f218fbbcdd" containerName="neutron-api" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025097 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3258ad4a-d940-41c3-b875-afadfcc317d4" containerName="swift-recon-cron" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025110 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed4b9c3-3a9b-4c60-a68b-046cf5288e88" containerName="ovs-vswitchd" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.025122 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="6795c6d5-6bb8-432f-b7ca-f29f33298093" containerName="galera" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.026432 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.039636 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28h5z"] Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.040329 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-catalog-content\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.040428 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-utilities\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.040525 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vxwg\" (UniqueName: \"kubernetes.io/projected/801732a2-f62f-4aae-93f9-3aef631c9440-kube-api-access-8vxwg\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.141584 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-catalog-content\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.141699 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-utilities\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.141784 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vxwg\" (UniqueName: \"kubernetes.io/projected/801732a2-f62f-4aae-93f9-3aef631c9440-kube-api-access-8vxwg\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.142422 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-catalog-content\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.142773 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-utilities\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.169412 4979 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8vxwg\" (UniqueName: \"kubernetes.io/projected/801732a2-f62f-4aae-93f9-3aef631c9440-kube-api-access-8vxwg\") pod \"redhat-marketplace-28h5z\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.394560 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:24 crc kubenswrapper[4979]: I0130 22:07:24.855688 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28h5z"] Jan 30 22:07:25 crc kubenswrapper[4979]: I0130 22:07:25.040225 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28h5z" event={"ID":"801732a2-f62f-4aae-93f9-3aef631c9440","Type":"ContainerStarted","Data":"5f7fa040114608d9bdc5d9422bb6af3a7ca2682adece58a5958352444d4ed476"} Jan 30 22:07:26 crc kubenswrapper[4979]: I0130 22:07:26.051716 4979 generic.go:334] "Generic (PLEG): container finished" podID="801732a2-f62f-4aae-93f9-3aef631c9440" containerID="38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2" exitCode=0 Jan 30 22:07:26 crc kubenswrapper[4979]: I0130 22:07:26.051841 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28h5z" event={"ID":"801732a2-f62f-4aae-93f9-3aef631c9440","Type":"ContainerDied","Data":"38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2"} Jan 30 22:07:28 crc kubenswrapper[4979]: I0130 22:07:28.069329 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:07:28 crc kubenswrapper[4979]: I0130 22:07:28.069658 4979 generic.go:334] "Generic (PLEG): container finished" podID="801732a2-f62f-4aae-93f9-3aef631c9440" containerID="0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48" exitCode=0 Jan 30 22:07:28 crc kubenswrapper[4979]: I0130 22:07:28.069709 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28h5z" event={"ID":"801732a2-f62f-4aae-93f9-3aef631c9440","Type":"ContainerDied","Data":"0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48"} Jan 30 22:07:28 crc kubenswrapper[4979]: E0130 22:07:28.069885 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:07:29 crc kubenswrapper[4979]: I0130 22:07:29.080778 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28h5z" event={"ID":"801732a2-f62f-4aae-93f9-3aef631c9440","Type":"ContainerStarted","Data":"13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4"} Jan 30 22:07:29 crc kubenswrapper[4979]: I0130 22:07:29.101174 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-28h5z" podStartSLOduration=2.586897568 podStartE2EDuration="5.101153563s" podCreationTimestamp="2026-01-30 22:07:24 +0000 UTC" firstStartedPulling="2026-01-30 22:07:26.053787793 +0000 UTC m=+1642.015034826" lastFinishedPulling="2026-01-30 
22:07:28.568043788 +0000 UTC m=+1644.529290821" observedRunningTime="2026-01-30 22:07:29.097953037 +0000 UTC m=+1645.059200070" watchObservedRunningTime="2026-01-30 22:07:29.101153563 +0000 UTC m=+1645.062400616" Jan 30 22:07:34 crc kubenswrapper[4979]: I0130 22:07:34.395249 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:34 crc kubenswrapper[4979]: I0130 22:07:34.395749 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:34 crc kubenswrapper[4979]: I0130 22:07:34.438152 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:35 crc kubenswrapper[4979]: I0130 22:07:35.171733 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:35 crc kubenswrapper[4979]: I0130 22:07:35.218683 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28h5z"] Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.154383 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-28h5z" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="registry-server" containerID="cri-o://13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4" gracePeriod=2 Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.547828 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.607606 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-utilities\") pod \"801732a2-f62f-4aae-93f9-3aef631c9440\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.607691 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vxwg\" (UniqueName: \"kubernetes.io/projected/801732a2-f62f-4aae-93f9-3aef631c9440-kube-api-access-8vxwg\") pod \"801732a2-f62f-4aae-93f9-3aef631c9440\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.607728 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-catalog-content\") pod \"801732a2-f62f-4aae-93f9-3aef631c9440\" (UID: \"801732a2-f62f-4aae-93f9-3aef631c9440\") " Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.608808 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-utilities" (OuterVolumeSpecName: "utilities") pod "801732a2-f62f-4aae-93f9-3aef631c9440" (UID: "801732a2-f62f-4aae-93f9-3aef631c9440"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.614761 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801732a2-f62f-4aae-93f9-3aef631c9440-kube-api-access-8vxwg" (OuterVolumeSpecName: "kube-api-access-8vxwg") pod "801732a2-f62f-4aae-93f9-3aef631c9440" (UID: "801732a2-f62f-4aae-93f9-3aef631c9440"). InnerVolumeSpecName "kube-api-access-8vxwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.644411 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "801732a2-f62f-4aae-93f9-3aef631c9440" (UID: "801732a2-f62f-4aae-93f9-3aef631c9440"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.709909 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.709967 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vxwg\" (UniqueName: \"kubernetes.io/projected/801732a2-f62f-4aae-93f9-3aef631c9440-kube-api-access-8vxwg\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:37 crc kubenswrapper[4979]: I0130 22:07:37.709982 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/801732a2-f62f-4aae-93f9-3aef631c9440-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.165535 4979 generic.go:334] "Generic (PLEG): container finished" podID="801732a2-f62f-4aae-93f9-3aef631c9440" containerID="13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4" exitCode=0 Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.165603 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28h5z" event={"ID":"801732a2-f62f-4aae-93f9-3aef631c9440","Type":"ContainerDied","Data":"13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4"} Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.165643 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28h5z" event={"ID":"801732a2-f62f-4aae-93f9-3aef631c9440","Type":"ContainerDied","Data":"5f7fa040114608d9bdc5d9422bb6af3a7ca2682adece58a5958352444d4ed476"} Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.165666 4979 scope.go:117] "RemoveContainer" containerID="13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.165862 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28h5z" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.185570 4979 scope.go:117] "RemoveContainer" containerID="0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.196682 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28h5z"] Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.202504 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-28h5z"] Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.223266 4979 scope.go:117] "RemoveContainer" containerID="38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.243748 4979 scope.go:117] "RemoveContainer" containerID="13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4" Jan 30 22:07:38 crc kubenswrapper[4979]: E0130 22:07:38.244288 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4\": container with ID starting with 13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4 not found: ID does not exist" containerID="13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.244918 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4"} err="failed to get container status \"13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4\": rpc error: code = NotFound desc = could not find container \"13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4\": container with ID starting with 13e8b91c723af47899b3cc10ebb3ca1fd3f6f31e238482a06e3badd1d09209e4 not found: ID does not exist" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.245020 4979 scope.go:117] "RemoveContainer" containerID="0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48" Jan 30 22:07:38 crc kubenswrapper[4979]: E0130 22:07:38.245739 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48\": container with ID starting with 0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48 not found: ID does not exist" containerID="0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.245783 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48"} err="failed to get container status \"0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48\": rpc error: code = NotFound desc = could not find container \"0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48\": container with ID starting with 0e46ff7bd9541906360885b451860e219941b6ca1a72af76729b9fa57867de48 not found: ID does not exist" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.245810 4979 scope.go:117] "RemoveContainer" containerID="38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2" Jan 30 22:07:38 crc kubenswrapper[4979]: E0130 22:07:38.246503 4979 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2\": container with ID starting with 38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2 not found: ID does not exist" containerID="38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2" Jan 30 22:07:38 crc kubenswrapper[4979]: I0130 22:07:38.246529 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2"} err="failed to get container status \"38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2\": rpc error: code = NotFound desc = could not find container \"38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2\": container with ID starting with 38196c26ac41c16d7a5794aa72db3ca051d122d0aa603c7807db981ea1d1bce2 not found: ID does not exist" Jan 30 22:07:39 crc kubenswrapper[4979]: I0130 22:07:39.079065 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" path="/var/lib/kubelet/pods/801732a2-f62f-4aae-93f9-3aef631c9440/volumes" Jan 30 22:07:41 crc kubenswrapper[4979]: I0130 22:07:41.070189 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:07:41 crc kubenswrapper[4979]: E0130 22:07:41.070495 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:07:52 crc kubenswrapper[4979]: I0130 22:07:52.070505 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:07:52 crc kubenswrapper[4979]: E0130 22:07:52.071343 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:08:04 crc kubenswrapper[4979]: I0130 22:08:04.069731 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:08:04 crc kubenswrapper[4979]: E0130 22:08:04.070758 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:08:15 crc kubenswrapper[4979]: I0130 22:08:15.073623 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:08:15 crc kubenswrapper[4979]: E0130 22:08:15.074432 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.751162 4979 scope.go:117] "RemoveContainer" containerID="e944b74595e093897d5163f1d6f5e2841d79cfe7a27b236506370f93704312ba" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.797913 4979 scope.go:117] "RemoveContainer" containerID="2a983b0743f2b2bf9c796ed27b781636f6d8f9667cb41df9212903e83c5acc92" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.844258 4979 scope.go:117] "RemoveContainer" containerID="d89396dba43eda148feb03a8bfaa17357461f4fc9b9261374a3239bcbd38441a" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.890789 4979 scope.go:117] "RemoveContainer" containerID="79ca49dab9783f66a2ceb714d9fa0a2f61e36e1771efaec7c095de2ed5249a25" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.916318 4979 scope.go:117] "RemoveContainer" containerID="bf7d515c41a90616fc9c098ab7b86a49d6e45238cee5250dcba6e62cadfccb13" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.939151 4979 scope.go:117] "RemoveContainer" containerID="11f2966357f7757e1c5ff42bbe596d8aabdfc99c75e4e411aa7571267254b305" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.962246 4979 scope.go:117] "RemoveContainer" containerID="f22a7e6623c93c4cc030d6b80af43c0a3dcf98b20f173cb5007da0a5eae591f9" Jan 30 22:08:22 crc kubenswrapper[4979]: I0130 22:08:22.994900 4979 scope.go:117] "RemoveContainer" containerID="60202a94174e28cbc487661cc024c8a1cf6c22c3cad5bc10eaa16a6b4124fa58" Jan 30 22:08:23 crc kubenswrapper[4979]: I0130 22:08:23.017969 4979 scope.go:117] "RemoveContainer" containerID="e569170f774015f0e1ddac11812bbd2f299bdb3f6dc5151d5fb36790b57f47e8" Jan 30 22:08:23 crc kubenswrapper[4979]: I0130 22:08:23.046337 4979 scope.go:117] "RemoveContainer" containerID="8b19c508f19bd2ec6e83e05f1f297998c5d48770b15b97debc2ae68900fd6e73" Jan 30 22:08:23 crc kubenswrapper[4979]: I0130 22:08:23.073853 4979 scope.go:117] "RemoveContainer" containerID="3fb131d5453fa0ed56f53c12148fc22c6f507209c0a8f0e89d75133fef0aa6cb" Jan 30 22:08:23 crc kubenswrapper[4979]: I0130 22:08:23.094430 4979 scope.go:117] "RemoveContainer" containerID="046e829584329e51995faf5e5f7dfeed89e26cdea94351a2f27847446a921702" Jan 30 22:08:23 crc kubenswrapper[4979]: I0130 22:08:23.116541 4979 scope.go:117] "RemoveContainer" containerID="aaf97ef50c0887dcb66e3577095047927fdefa42dfe34fc18aab2b8a15ac9805" Jan 30 22:08:30 crc kubenswrapper[4979]: I0130 22:08:30.069392 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:08:30 crc kubenswrapper[4979]: E0130 22:08:30.070302 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:08:43 crc kubenswrapper[4979]: I0130 22:08:43.070458 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:08:43 crc kubenswrapper[4979]: 
E0130 22:08:43.071489 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:08:57 crc kubenswrapper[4979]: I0130 22:08:57.070571 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:08:57 crc kubenswrapper[4979]: E0130 22:08:57.071647 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:09:10 crc kubenswrapper[4979]: I0130 22:09:10.070231 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:09:10 crc kubenswrapper[4979]: E0130 22:09:10.071382 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.346390 4979 scope.go:117] "RemoveContainer" containerID="68f01a62fc8c4a233f111cbe66a15bcea6da8611b4b7671cb26edad699fef747" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.380400 4979 scope.go:117] "RemoveContainer" containerID="edcc79875734fdba9dd8e28171366d93b289c592ed8ec92b3fba51d021505e99" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.398581 4979 scope.go:117] "RemoveContainer" containerID="d775e4bedb5dba7162d0b89985eadfea2585c2425816a98d45bf2a5aee52a9dc" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.412582 4979 scope.go:117] "RemoveContainer" containerID="009e01f0d8f5d7eb63f0cb71f39fe5ecce8c1604f3d9fcde721ca558795f16e3" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.448373 4979 scope.go:117] "RemoveContainer" containerID="240dc00562487f4f79338fb7476cc903b5a593732bc0312e48d962f852dc3eeb" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.475629 4979 scope.go:117] "RemoveContainer" containerID="b87dfaf39281615f48403ce307bb51ad9f7df21ce90a59879ea17a4270453139" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.496844 4979 scope.go:117] "RemoveContainer" containerID="87b17ed31e0a099bbbdad24d1f20213b81ce5f1d8bbc12cb5d970696a0596091" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.521024 4979 scope.go:117] "RemoveContainer" containerID="4bff6c93d10ae5d79c2f86866faa569249ca91ad63e93e5aed7ec9e5c7ae69e3" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.544228 4979 scope.go:117] "RemoveContainer" containerID="9d8dfa3f28e549253bc3c74adc2593d512df4a8ba19da4e9daca2c7d742b4a42" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.566659 4979 scope.go:117] "RemoveContainer" 
containerID="db8279f109bd17f628e44659d3d7f1d466d6bb9b71489014bb4d28dd40cb2a62" Jan 30 22:09:23 crc kubenswrapper[4979]: I0130 22:09:23.583330 4979 scope.go:117] "RemoveContainer" containerID="7f05f0be617476aee0f02ee8e76e53920df42776411e8ddeff1d11ffb5f9be89" Jan 30 22:09:24 crc kubenswrapper[4979]: I0130 22:09:24.069844 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:09:24 crc kubenswrapper[4979]: E0130 22:09:24.070253 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:09:35 crc kubenswrapper[4979]: I0130 22:09:35.073999 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:09:35 crc kubenswrapper[4979]: E0130 22:09:35.074933 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:09:49 crc kubenswrapper[4979]: I0130 22:09:49.070337 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:09:49 crc kubenswrapper[4979]: E0130 22:09:49.071404 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:10:04 crc kubenswrapper[4979]: I0130 22:10:04.070347 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:10:04 crc kubenswrapper[4979]: E0130 22:10:04.072739 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:10:17 crc kubenswrapper[4979]: I0130 22:10:17.070012 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:10:17 crc kubenswrapper[4979]: E0130 22:10:17.070800 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.736703 4979 scope.go:117] "RemoveContainer" containerID="33be242a70bfcf61aafc753268bb59c2e8a2a55bfc2666cef9e675491b558cd9" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.762492 4979 scope.go:117] "RemoveContainer" containerID="aa559b1135f6618404d0e60d9a772fc66e419ae78eeefe9bc432ad7bad847635" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.794302 4979 scope.go:117] "RemoveContainer" containerID="3a0f2c5f20fe7df83f657bd57b9e6599013ae4fe90547daa544d3812ba096c45" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.815729 4979 scope.go:117] "RemoveContainer" containerID="4346269c3467fb9983ba22a3da499f523fe4b5d9072377bdb3c9eadf809fe8ff" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.857551 4979 scope.go:117] "RemoveContainer" containerID="78e6994e836809eb6c4147c73b39f8c34653cb31054d04a758e600e5a045351d" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.896305 4979 scope.go:117] "RemoveContainer" containerID="65f7df0a5f220ddf8b419657c4d7771409b9a8c3c511a14b07fabfbb8e20fede" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.918737 4979 scope.go:117] "RemoveContainer" containerID="d40ebbabe3d8f2995f627a1ae83a4f0a8052321d11e2329aba49ee99c9ce1294" Jan 30 22:10:23 crc kubenswrapper[4979]: I0130 22:10:23.949242 4979 scope.go:117] "RemoveContainer" containerID="ba2e39cff92291b5bd37681d66a67ae8cdc39f314eafc2ca6a8f88001981f1b9" Jan 30 22:10:24 crc kubenswrapper[4979]: I0130 22:10:24.004764 4979 scope.go:117] "RemoveContainer" containerID="10bc5c2d6026fb9b6e38741866768cd6cce92452ca56fb4384be71b3bffc65c0" Jan 30 22:10:24 crc kubenswrapper[4979]: I0130 22:10:24.031331 4979 scope.go:117] "RemoveContainer" containerID="2764ceb6c35ea2f48a0d751046545351bbcae998483bb75989d6728581aa19d8" Jan 30 22:10:24 crc kubenswrapper[4979]: I0130 22:10:24.052801 4979 scope.go:117] "RemoveContainer" containerID="d6d25ae31ed5e6d9c7cb7e6adcce8605ff98681415f720f118a7c85b8f2468e0" Jan 30 22:10:24 crc kubenswrapper[4979]: I0130 22:10:24.074962 4979 scope.go:117] "RemoveContainer" containerID="ce15c22300306383eb564954b64ad58a13fe8c8c246e3d682e1063ba2ed2a496" Jan 30 22:10:24 crc kubenswrapper[4979]: I0130 22:10:24.100329 4979 scope.go:117] "RemoveContainer" containerID="70c9e4b75f4b6026504bbe59f295f79a6dc13bad465ac3a98878072f04debbd7" Jan 30 22:10:29 crc kubenswrapper[4979]: I0130 22:10:29.070268 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:10:29 crc kubenswrapper[4979]: E0130 22:10:29.072007 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:10:44 crc kubenswrapper[4979]: I0130 22:10:44.069453 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:10:44 crc kubenswrapper[4979]: E0130 22:10:44.070240 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:10:55 crc kubenswrapper[4979]: I0130 22:10:55.074297 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:10:55 crc kubenswrapper[4979]: E0130 22:10:55.075237 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:11:08 crc kubenswrapper[4979]: I0130 22:11:08.069386 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:11:08 crc kubenswrapper[4979]: I0130 22:11:08.658680 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"5a3026fb9e26d3616c6dc68ee7fd700cea35f3ff62a0741f624c5af22c234a87"} Jan 30 22:11:24 crc kubenswrapper[4979]: I0130 22:11:24.280615 4979 scope.go:117] "RemoveContainer" containerID="03fcd58bcede39bf0ce2578dd97f75b5dfefffae36f69c196076f3970b1d584e" Jan 30 22:11:24 crc kubenswrapper[4979]: I0130 22:11:24.346146 4979 scope.go:117] "RemoveContainer" containerID="10c1f71e257099ef965fe8ed07f831aabf20fafa7023702d589fe76aa2e8e755" Jan 30 22:11:24 crc kubenswrapper[4979]: I0130 22:11:24.374072 4979 scope.go:117] "RemoveContainer" containerID="4bd5fadc7d49f6d0917b463f6bb16e126a837db96ddad54cd74e72ea4b07d33a" Jan 30 22:11:24 crc kubenswrapper[4979]: I0130 22:11:24.392970 4979 scope.go:117] "RemoveContainer" containerID="5b8c31638b5486835421778350c31d34ef94715ad8979849599bdf9ef248f6ef" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.804103 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fgn85"] Jan 30 22:12:07 crc kubenswrapper[4979]: E0130 22:12:07.805310 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="extract-utilities" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.805332 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="extract-utilities" Jan 30 22:12:07 crc kubenswrapper[4979]: E0130 22:12:07.805374 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="extract-content" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.805384 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="extract-content" Jan 30 22:12:07 crc kubenswrapper[4979]: E0130 22:12:07.805409 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="registry-server" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.805419 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="registry-server" Jan 30 22:12:07 crc 
kubenswrapper[4979]: I0130 22:12:07.805638 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="801732a2-f62f-4aae-93f9-3aef631c9440" containerName="registry-server" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.806876 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.811457 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fgn85"] Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.878253 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-catalog-content\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.878316 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vslm7\" (UniqueName: \"kubernetes.io/projected/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-kube-api-access-vslm7\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.878529 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-utilities\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.980619 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-catalog-content\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.980697 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vslm7\" (UniqueName: \"kubernetes.io/projected/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-kube-api-access-vslm7\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.980758 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-utilities\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.981399 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-catalog-content\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:07 crc kubenswrapper[4979]: I0130 22:12:07.981480 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-utilities\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:08 crc kubenswrapper[4979]: I0130 22:12:08.004276 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vslm7\" (UniqueName: \"kubernetes.io/projected/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-kube-api-access-vslm7\") pod \"certified-operators-fgn85\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:08 crc kubenswrapper[4979]: I0130 22:12:08.138281 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:08 crc kubenswrapper[4979]: I0130 22:12:08.792219 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fgn85"] Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.196843 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-trp49"] Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.203101 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.216793 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-trp49"] Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.220349 4979 generic.go:334] "Generic (PLEG): container finished" podID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerID="0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4" exitCode=0 Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.220411 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerDied","Data":"0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4"} Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.220451 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerStarted","Data":"532bfd5cbbcfc092ff84e0e7922718c9662053f5efe4ea007105b76084f9b245"} Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.224025 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.304257 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7qv7\" (UniqueName: \"kubernetes.io/projected/b5d02742-f26c-416a-a917-03ca6eb81632-kube-api-access-g7qv7\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.304768 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-utilities\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.304801 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-catalog-content\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.406427 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7qv7\" (UniqueName: \"kubernetes.io/projected/b5d02742-f26c-416a-a917-03ca6eb81632-kube-api-access-g7qv7\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.406501 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-utilities\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.406528 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-catalog-content\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.407073 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-catalog-content\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.407324 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-utilities\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.430505 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7qv7\" (UniqueName: \"kubernetes.io/projected/b5d02742-f26c-416a-a917-03ca6eb81632-kube-api-access-g7qv7\") pod \"community-operators-trp49\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:09 crc kubenswrapper[4979]: I0130 22:12:09.530954 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:10 crc kubenswrapper[4979]: I0130 22:12:10.049922 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-trp49"] Jan 30 22:12:10 crc kubenswrapper[4979]: I0130 22:12:10.228353 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerStarted","Data":"96388def628b52996ee4d68f1d8765758fc21882972267a914fcf8190cf29fa3"} Jan 30 22:12:10 crc kubenswrapper[4979]: I0130 22:12:10.228404 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerStarted","Data":"a88de8b1be80ae7f14d10157dabb9bd7056d8724d1af969bce465123bf79c71b"} Jan 30 22:12:10 crc kubenswrapper[4979]: I0130 22:12:10.234054 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerStarted","Data":"2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69"} Jan 30 22:12:11 crc kubenswrapper[4979]: I0130 22:12:11.246728 4979 generic.go:334] "Generic (PLEG): container finished" podID="b5d02742-f26c-416a-a917-03ca6eb81632" containerID="96388def628b52996ee4d68f1d8765758fc21882972267a914fcf8190cf29fa3" exitCode=0 Jan 30 22:12:11 crc kubenswrapper[4979]: I0130 22:12:11.246858 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerDied","Data":"96388def628b52996ee4d68f1d8765758fc21882972267a914fcf8190cf29fa3"} Jan 30 22:12:11 crc kubenswrapper[4979]: I0130 22:12:11.250818 4979 generic.go:334] "Generic (PLEG): container finished" podID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerID="2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69" exitCode=0 Jan 30 22:12:11 crc kubenswrapper[4979]: I0130 22:12:11.250880 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerDied","Data":"2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69"} Jan 30 22:12:12 crc kubenswrapper[4979]: I0130 22:12:12.262124 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerStarted","Data":"f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f"} Jan 30 22:12:12 crc kubenswrapper[4979]: I0130 22:12:12.264441 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerStarted","Data":"60ead3abe567f877266b10190d45b176cc795718ec6efbd3c794bbf2a632ebf9"} Jan 30 22:12:12 crc kubenswrapper[4979]: I0130 22:12:12.292673 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fgn85" podStartSLOduration=2.7385316939999997 podStartE2EDuration="5.292643604s" podCreationTimestamp="2026-01-30 22:12:07 +0000 UTC" firstStartedPulling="2026-01-30 22:12:09.223677242 +0000 UTC m=+1925.184924275" lastFinishedPulling="2026-01-30 22:12:11.777789152 +0000 UTC m=+1927.739036185" observedRunningTime="2026-01-30 22:12:12.284764812 
+0000 UTC m=+1928.246011875" watchObservedRunningTime="2026-01-30 22:12:12.292643604 +0000 UTC m=+1928.253890647" Jan 30 22:12:13 crc kubenswrapper[4979]: I0130 22:12:13.274810 4979 generic.go:334] "Generic (PLEG): container finished" podID="b5d02742-f26c-416a-a917-03ca6eb81632" containerID="60ead3abe567f877266b10190d45b176cc795718ec6efbd3c794bbf2a632ebf9" exitCode=0 Jan 30 22:12:13 crc kubenswrapper[4979]: I0130 22:12:13.274909 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerDied","Data":"60ead3abe567f877266b10190d45b176cc795718ec6efbd3c794bbf2a632ebf9"} Jan 30 22:12:14 crc kubenswrapper[4979]: I0130 22:12:14.286581 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerStarted","Data":"197b2659a1779b2eba748a455032820c263e6800b77904f9cf83932e4807aba9"} Jan 30 22:12:14 crc kubenswrapper[4979]: I0130 22:12:14.308900 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-trp49" podStartSLOduration=2.8902174929999997 podStartE2EDuration="5.308871309s" podCreationTimestamp="2026-01-30 22:12:09 +0000 UTC" firstStartedPulling="2026-01-30 22:12:11.249819906 +0000 UTC m=+1927.211066969" lastFinishedPulling="2026-01-30 22:12:13.668473752 +0000 UTC m=+1929.629720785" observedRunningTime="2026-01-30 22:12:14.303840584 +0000 UTC m=+1930.265087617" watchObservedRunningTime="2026-01-30 22:12:14.308871309 +0000 UTC m=+1930.270118352" Jan 30 22:12:18 crc kubenswrapper[4979]: I0130 22:12:18.138673 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:18 crc kubenswrapper[4979]: I0130 22:12:18.139153 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:18 crc kubenswrapper[4979]: I0130 22:12:18.191840 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:18 crc kubenswrapper[4979]: I0130 22:12:18.360474 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:18 crc kubenswrapper[4979]: I0130 22:12:18.981287 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fgn85"] Jan 30 22:12:19 crc kubenswrapper[4979]: I0130 22:12:19.531460 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:19 crc kubenswrapper[4979]: I0130 22:12:19.532240 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:19 crc kubenswrapper[4979]: I0130 22:12:19.580639 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.340281 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fgn85" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="registry-server" containerID="cri-o://f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f" gracePeriod=2 Jan 30 22:12:20 crc 
kubenswrapper[4979]: I0130 22:12:20.401788 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.733633 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.796190 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-catalog-content\") pod \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.796312 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vslm7\" (UniqueName: \"kubernetes.io/projected/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-kube-api-access-vslm7\") pod \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.796356 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-utilities\") pod \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\" (UID: \"c48a1a8d-b1ab-431a-87c6-0cba912c20e7\") " Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.798021 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-utilities" (OuterVolumeSpecName: "utilities") pod "c48a1a8d-b1ab-431a-87c6-0cba912c20e7" (UID: "c48a1a8d-b1ab-431a-87c6-0cba912c20e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.831437 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-kube-api-access-vslm7" (OuterVolumeSpecName: "kube-api-access-vslm7") pod "c48a1a8d-b1ab-431a-87c6-0cba912c20e7" (UID: "c48a1a8d-b1ab-431a-87c6-0cba912c20e7"). InnerVolumeSpecName "kube-api-access-vslm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.897668 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:20 crc kubenswrapper[4979]: I0130 22:12:20.897795 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vslm7\" (UniqueName: \"kubernetes.io/projected/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-kube-api-access-vslm7\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.147646 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c48a1a8d-b1ab-431a-87c6-0cba912c20e7" (UID: "c48a1a8d-b1ab-431a-87c6-0cba912c20e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.204936 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48a1a8d-b1ab-431a-87c6-0cba912c20e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.347857 4979 generic.go:334] "Generic (PLEG): container finished" podID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerID="f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f" exitCode=0 Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.347927 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgn85" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.347975 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerDied","Data":"f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f"} Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.348101 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgn85" event={"ID":"c48a1a8d-b1ab-431a-87c6-0cba912c20e7","Type":"ContainerDied","Data":"532bfd5cbbcfc092ff84e0e7922718c9662053f5efe4ea007105b76084f9b245"} Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.348126 4979 scope.go:117] "RemoveContainer" containerID="f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.368114 4979 scope.go:117] "RemoveContainer" containerID="2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.398344 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fgn85"] Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.401443 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fgn85"] Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.403392 4979 scope.go:117] "RemoveContainer" containerID="0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.428448 4979 scope.go:117] "RemoveContainer" containerID="f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f" Jan 30 22:12:21 crc kubenswrapper[4979]: E0130 22:12:21.429383 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f\": container with ID starting with f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f not found: ID does not exist" containerID="f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.429454 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f"} err="failed to get container status \"f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f\": rpc error: code = NotFound desc = could not find container \"f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f\": container with ID starting with f6e133f7b32b14c5f628ea995fa8f2c858d1c85c5427db4aac17d5613970995f not found: ID does not exist" Jan 30 
22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.429491 4979 scope.go:117] "RemoveContainer" containerID="2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69" Jan 30 22:12:21 crc kubenswrapper[4979]: E0130 22:12:21.430059 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69\": container with ID starting with 2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69 not found: ID does not exist" containerID="2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.430119 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69"} err="failed to get container status \"2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69\": rpc error: code = NotFound desc = could not find container \"2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69\": container with ID starting with 2a4ca5374bfb1d72f18c0545e4508c03a6e6b074a1e6039beaea610404d10a69 not found: ID does not exist" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.430153 4979 scope.go:117] "RemoveContainer" containerID="0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4" Jan 30 22:12:21 crc kubenswrapper[4979]: E0130 22:12:21.430630 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4\": container with ID starting with 0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4 not found: ID does not exist" containerID="0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.430686 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4"} err="failed to get container status \"0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4\": rpc error: code = NotFound desc = could not find container \"0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4\": container with ID starting with 0f78024fb94a1665f6820349a08876baf8e0074813123700418761f6bcb5a5d4 not found: ID does not exist" Jan 30 22:12:21 crc kubenswrapper[4979]: I0130 22:12:21.983323 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-trp49"] Jan 30 22:12:22 crc kubenswrapper[4979]: I0130 22:12:22.358118 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-trp49" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="registry-server" containerID="cri-o://197b2659a1779b2eba748a455032820c263e6800b77904f9cf83932e4807aba9" gracePeriod=2 Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.078879 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" path="/var/lib/kubelet/pods/c48a1a8d-b1ab-431a-87c6-0cba912c20e7/volumes" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.370254 4979 generic.go:334] "Generic (PLEG): container finished" podID="b5d02742-f26c-416a-a917-03ca6eb81632" containerID="197b2659a1779b2eba748a455032820c263e6800b77904f9cf83932e4807aba9" exitCode=0 Jan 30 22:12:23 crc 
kubenswrapper[4979]: I0130 22:12:23.370339 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerDied","Data":"197b2659a1779b2eba748a455032820c263e6800b77904f9cf83932e4807aba9"} Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.439662 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.641401 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-utilities\") pod \"b5d02742-f26c-416a-a917-03ca6eb81632\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.642056 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-catalog-content\") pod \"b5d02742-f26c-416a-a917-03ca6eb81632\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.642187 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7qv7\" (UniqueName: \"kubernetes.io/projected/b5d02742-f26c-416a-a917-03ca6eb81632-kube-api-access-g7qv7\") pod \"b5d02742-f26c-416a-a917-03ca6eb81632\" (UID: \"b5d02742-f26c-416a-a917-03ca6eb81632\") " Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.642378 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-utilities" (OuterVolumeSpecName: "utilities") pod "b5d02742-f26c-416a-a917-03ca6eb81632" (UID: "b5d02742-f26c-416a-a917-03ca6eb81632"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.642455 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.649433 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d02742-f26c-416a-a917-03ca6eb81632-kube-api-access-g7qv7" (OuterVolumeSpecName: "kube-api-access-g7qv7") pod "b5d02742-f26c-416a-a917-03ca6eb81632" (UID: "b5d02742-f26c-416a-a917-03ca6eb81632"). InnerVolumeSpecName "kube-api-access-g7qv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.697892 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5d02742-f26c-416a-a917-03ca6eb81632" (UID: "b5d02742-f26c-416a-a917-03ca6eb81632"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.744868 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7qv7\" (UniqueName: \"kubernetes.io/projected/b5d02742-f26c-416a-a917-03ca6eb81632-kube-api-access-g7qv7\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:23 crc kubenswrapper[4979]: I0130 22:12:23.744919 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d02742-f26c-416a-a917-03ca6eb81632-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.383391 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trp49" event={"ID":"b5d02742-f26c-416a-a917-03ca6eb81632","Type":"ContainerDied","Data":"a88de8b1be80ae7f14d10157dabb9bd7056d8724d1af969bce465123bf79c71b"} Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.383482 4979 scope.go:117] "RemoveContainer" containerID="197b2659a1779b2eba748a455032820c263e6800b77904f9cf83932e4807aba9" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.383520 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trp49" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.425760 4979 scope.go:117] "RemoveContainer" containerID="60ead3abe567f877266b10190d45b176cc795718ec6efbd3c794bbf2a632ebf9" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.434019 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-trp49"] Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.443265 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-trp49"] Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.451690 4979 scope.go:117] "RemoveContainer" containerID="96388def628b52996ee4d68f1d8765758fc21882972267a914fcf8190cf29fa3" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.514670 4979 scope.go:117] "RemoveContainer" containerID="2f2fbcbfa3fb8957bd22dbbdae0f118ed4065b8e1b28fd2310cab48fd875577d" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.537632 4979 scope.go:117] "RemoveContainer" containerID="748d1a4bd7c293d8968765b3b267f988706b6c7ba86f06948fccdfb30542ea96" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.555628 4979 scope.go:117] "RemoveContainer" containerID="273d72dd649ce744e0e01b7f87b5608830beff1b94683daf56bbf5dd25211839" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.597421 4979 scope.go:117] "RemoveContainer" containerID="99f9e7602668b98789ff476044ada1b106a498ed44ed34ee5c2700adce022186" Jan 30 22:12:24 crc kubenswrapper[4979]: I0130 22:12:24.617675 4979 scope.go:117] "RemoveContainer" containerID="5ac3c882827d52df05b6724629ccc459728f629242f9b9649899fbfb3897e504" Jan 30 22:12:25 crc kubenswrapper[4979]: I0130 22:12:25.078921 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" path="/var/lib/kubelet/pods/b5d02742-f26c-416a-a917-03ca6eb81632/volumes" Jan 30 22:13:32 crc kubenswrapper[4979]: I0130 22:13:32.039704 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:13:32 crc kubenswrapper[4979]: 
I0130 22:13:32.040359 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:14:02 crc kubenswrapper[4979]: I0130 22:14:02.039929 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:14:02 crc kubenswrapper[4979]: I0130 22:14:02.041156 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.040313 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.041027 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.041093 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.041700 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a3026fb9e26d3616c6dc68ee7fd700cea35f3ff62a0741f624c5af22c234a87"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.041750 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://5a3026fb9e26d3616c6dc68ee7fd700cea35f3ff62a0741f624c5af22c234a87" gracePeriod=600 Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.421272 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="5a3026fb9e26d3616c6dc68ee7fd700cea35f3ff62a0741f624c5af22c234a87" exitCode=0 Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.421398 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"5a3026fb9e26d3616c6dc68ee7fd700cea35f3ff62a0741f624c5af22c234a87"} Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.421817 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"} Jan 30 22:14:32 crc kubenswrapper[4979]: I0130 22:14:32.421857 4979 scope.go:117] "RemoveContainer" containerID="1ea3e310f037b6f8382faa235dba5e2f7d22ee01bca744f7acf8a5b9c2ecd06c" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.149843 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x"] Jan 30 22:15:00 crc kubenswrapper[4979]: E0130 22:15:00.150891 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="extract-content" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.150908 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="extract-content" Jan 30 22:15:00 crc kubenswrapper[4979]: E0130 22:15:00.150927 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="extract-content" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.150936 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="extract-content" Jan 30 22:15:00 crc kubenswrapper[4979]: E0130 22:15:00.150959 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.150967 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4979]: E0130 22:15:00.150980 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.150988 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4979]: E0130 22:15:00.151002 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="extract-utilities" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.151010 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="extract-utilities" Jan 30 22:15:00 crc kubenswrapper[4979]: E0130 22:15:00.151023 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="extract-utilities" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.151046 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="extract-utilities" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.151206 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5d02742-f26c-416a-a917-03ca6eb81632" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.151225 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c48a1a8d-b1ab-431a-87c6-0cba912c20e7" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.151873 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.154218 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.159752 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.162851 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x"] Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.258316 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-config-volume\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.258405 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4pt8\" (UniqueName: \"kubernetes.io/projected/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-kube-api-access-b4pt8\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.258481 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-secret-volume\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.359335 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4pt8\" (UniqueName: \"kubernetes.io/projected/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-kube-api-access-b4pt8\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.359409 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-secret-volume\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.359450 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-config-volume\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.360385 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-config-volume\") pod 
\"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.370044 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-secret-volume\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.380115 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4pt8\" (UniqueName: \"kubernetes.io/projected/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-kube-api-access-b4pt8\") pod \"collect-profiles-29496855-kz88x\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.488000 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:00 crc kubenswrapper[4979]: I0130 22:15:00.922107 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x"] Jan 30 22:15:01 crc kubenswrapper[4979]: I0130 22:15:01.652810 4979 generic.go:334] "Generic (PLEG): container finished" podID="4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" containerID="3bbe88baa1620c36ba12ba04d5a8542170b476b0b0988530b1848eeba6a89780" exitCode=0 Jan 30 22:15:01 crc kubenswrapper[4979]: I0130 22:15:01.652903 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" event={"ID":"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8","Type":"ContainerDied","Data":"3bbe88baa1620c36ba12ba04d5a8542170b476b0b0988530b1848eeba6a89780"} Jan 30 22:15:01 crc kubenswrapper[4979]: I0130 22:15:01.654123 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" event={"ID":"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8","Type":"ContainerStarted","Data":"d2a08b9f9fb63d024bde97d062854de25051a1d3114221d7de424c9d5a44ccb3"} Jan 30 22:15:02 crc kubenswrapper[4979]: I0130 22:15:02.939015 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.004810 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4pt8\" (UniqueName: \"kubernetes.io/projected/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-kube-api-access-b4pt8\") pod \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.004928 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-config-volume\") pod \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.005154 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-secret-volume\") pod \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\" (UID: \"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8\") " Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.005814 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-config-volume" (OuterVolumeSpecName: "config-volume") pod "4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" (UID: "4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.013405 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-kube-api-access-b4pt8" (OuterVolumeSpecName: "kube-api-access-b4pt8") pod "4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" (UID: "4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8"). InnerVolumeSpecName "kube-api-access-b4pt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.014396 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" (UID: "4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.106265 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.106308 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.106320 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4pt8\" (UniqueName: \"kubernetes.io/projected/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8-kube-api-access-b4pt8\") on node \"crc\" DevicePath \"\"" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.670705 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" event={"ID":"4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8","Type":"ContainerDied","Data":"d2a08b9f9fb63d024bde97d062854de25051a1d3114221d7de424c9d5a44ccb3"} Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.671237 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2a08b9f9fb63d024bde97d062854de25051a1d3114221d7de424c9d5a44ccb3" Jan 30 22:15:03 crc kubenswrapper[4979]: I0130 22:15:03.670845 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x" Jan 30 22:15:04 crc kubenswrapper[4979]: I0130 22:15:04.045680 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"] Jan 30 22:15:04 crc kubenswrapper[4979]: I0130 22:15:04.054280 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-qqbl6"] Jan 30 22:15:05 crc kubenswrapper[4979]: I0130 22:15:05.089193 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43f94f0-791b-49cc-afe0-95ec18aa1f07" path="/var/lib/kubelet/pods/b43f94f0-791b-49cc-afe0-95ec18aa1f07/volumes" Jan 30 22:15:24 crc kubenswrapper[4979]: I0130 22:15:24.744820 4979 scope.go:117] "RemoveContainer" containerID="72cb010adee8d42eeef544e6077e19cc4bd21ebcf2f83845c5c858b217b33727" Jan 30 22:16:32 crc kubenswrapper[4979]: I0130 22:16:32.040067 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:16:32 crc kubenswrapper[4979]: I0130 22:16:32.040914 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.283155 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r64t2"] Jan 30 22:16:55 crc kubenswrapper[4979]: E0130 22:16:55.284163 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" 
containerName="collect-profiles" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.284177 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" containerName="collect-profiles" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.284374 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" containerName="collect-profiles" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.285663 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.294150 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r64t2"] Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.403838 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-catalog-content\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.403919 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gfc5\" (UniqueName: \"kubernetes.io/projected/5488cdca-2b6c-4fa2-bd28-103b7babd258-kube-api-access-4gfc5\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.404103 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-utilities\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.505209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-catalog-content\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.505273 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gfc5\" (UniqueName: \"kubernetes.io/projected/5488cdca-2b6c-4fa2-bd28-103b7babd258-kube-api-access-4gfc5\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.505305 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-utilities\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.505907 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-catalog-content\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " 
pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.505982 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-utilities\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.526062 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gfc5\" (UniqueName: \"kubernetes.io/projected/5488cdca-2b6c-4fa2-bd28-103b7babd258-kube-api-access-4gfc5\") pod \"redhat-operators-r64t2\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") " pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:55 crc kubenswrapper[4979]: I0130 22:16:55.619092 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:16:56 crc kubenswrapper[4979]: I0130 22:16:56.104060 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r64t2"] Jan 30 22:16:56 crc kubenswrapper[4979]: I0130 22:16:56.902747 4979 generic.go:334] "Generic (PLEG): container finished" podID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerID="eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c" exitCode=0 Jan 30 22:16:56 crc kubenswrapper[4979]: I0130 22:16:56.902841 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerDied","Data":"eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c"} Jan 30 22:16:56 crc kubenswrapper[4979]: I0130 22:16:56.903161 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerStarted","Data":"b41ac97e91ce5fa78d2960a00ec85ea0838948d02e01ae651778acba106c8d44"} Jan 30 22:16:57 crc kubenswrapper[4979]: I0130 22:16:57.918315 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerStarted","Data":"ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e"} Jan 30 22:16:58 crc kubenswrapper[4979]: I0130 22:16:58.966537 4979 generic.go:334] "Generic (PLEG): container finished" podID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerID="ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e" exitCode=0 Jan 30 22:16:58 crc kubenswrapper[4979]: I0130 22:16:58.966836 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerDied","Data":"ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e"} Jan 30 22:16:59 crc kubenswrapper[4979]: I0130 22:16:59.977977 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerStarted","Data":"b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23"} Jan 30 22:16:59 crc kubenswrapper[4979]: I0130 22:16:59.998189 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r64t2" podStartSLOduration=2.502903448 podStartE2EDuration="4.998169829s" 
podCreationTimestamp="2026-01-30 22:16:55 +0000 UTC" firstStartedPulling="2026-01-30 22:16:56.905598359 +0000 UTC m=+2212.866845382" lastFinishedPulling="2026-01-30 22:16:59.40086473 +0000 UTC m=+2215.362111763" observedRunningTime="2026-01-30 22:16:59.995952019 +0000 UTC m=+2215.957199052" watchObservedRunningTime="2026-01-30 22:16:59.998169829 +0000 UTC m=+2215.959416862" Jan 30 22:17:02 crc kubenswrapper[4979]: I0130 22:17:02.039898 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:17:02 crc kubenswrapper[4979]: I0130 22:17:02.040085 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:17:05 crc kubenswrapper[4979]: I0130 22:17:05.620223 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:17:05 crc kubenswrapper[4979]: I0130 22:17:05.620509 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:17:05 crc kubenswrapper[4979]: I0130 22:17:05.662899 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:17:06 crc kubenswrapper[4979]: I0130 22:17:06.076754 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r64t2" Jan 30 22:17:06 crc kubenswrapper[4979]: I0130 22:17:06.123512 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r64t2"] Jan 30 22:17:08 crc kubenswrapper[4979]: I0130 22:17:08.045922 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r64t2" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="registry-server" containerID="cri-o://b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23" gracePeriod=2 Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.522723 4979 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.722308 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-utilities\") pod \"5488cdca-2b6c-4fa2-bd28-103b7babd258\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") "
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.722479 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-catalog-content\") pod \"5488cdca-2b6c-4fa2-bd28-103b7babd258\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") "
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.722556 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gfc5\" (UniqueName: \"kubernetes.io/projected/5488cdca-2b6c-4fa2-bd28-103b7babd258-kube-api-access-4gfc5\") pod \"5488cdca-2b6c-4fa2-bd28-103b7babd258\" (UID: \"5488cdca-2b6c-4fa2-bd28-103b7babd258\") "
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.723536 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-utilities" (OuterVolumeSpecName: "utilities") pod "5488cdca-2b6c-4fa2-bd28-103b7babd258" (UID: "5488cdca-2b6c-4fa2-bd28-103b7babd258"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.729121 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5488cdca-2b6c-4fa2-bd28-103b7babd258-kube-api-access-4gfc5" (OuterVolumeSpecName: "kube-api-access-4gfc5") pod "5488cdca-2b6c-4fa2-bd28-103b7babd258" (UID: "5488cdca-2b6c-4fa2-bd28-103b7babd258"). InnerVolumeSpecName "kube-api-access-4gfc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.823864 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gfc5\" (UniqueName: \"kubernetes.io/projected/5488cdca-2b6c-4fa2-bd28-103b7babd258-kube-api-access-4gfc5\") on node \"crc\" DevicePath \"\""
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.823901 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.852226 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5488cdca-2b6c-4fa2-bd28-103b7babd258" (UID: "5488cdca-2b6c-4fa2-bd28-103b7babd258"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:17:09 crc kubenswrapper[4979]: I0130 22:17:09.925267 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5488cdca-2b6c-4fa2-bd28-103b7babd258-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.061565 4979 generic.go:334] "Generic (PLEG): container finished" podID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerID="b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23" exitCode=0
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.061620 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerDied","Data":"b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23"}
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.061661 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r64t2" event={"ID":"5488cdca-2b6c-4fa2-bd28-103b7babd258","Type":"ContainerDied","Data":"b41ac97e91ce5fa78d2960a00ec85ea0838948d02e01ae651778acba106c8d44"}
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.061657 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r64t2"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.061730 4979 scope.go:117] "RemoveContainer" containerID="b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.082126 4979 scope.go:117] "RemoveContainer" containerID="ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.107523 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r64t2"]
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.114799 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r64t2"]
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.117963 4979 scope.go:117] "RemoveContainer" containerID="eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.140154 4979 scope.go:117] "RemoveContainer" containerID="b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23"
Jan 30 22:17:10 crc kubenswrapper[4979]: E0130 22:17:10.141295 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23\": container with ID starting with b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23 not found: ID does not exist" containerID="b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.141341 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23"} err="failed to get container status \"b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23\": rpc error: code = NotFound desc = could not find container \"b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23\": container with ID starting with b176349754f9438c4eff5d5d6f26d9d4ab9f0d0be7c373068bb6abf7e3be2d23 not found: ID does not exist"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.141376 4979 scope.go:117] "RemoveContainer" containerID="ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e"
Jan 30 22:17:10 crc kubenswrapper[4979]: E0130 22:17:10.141981 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e\": container with ID starting with ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e not found: ID does not exist" containerID="ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.142111 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e"} err="failed to get container status \"ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e\": rpc error: code = NotFound desc = could not find container \"ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e\": container with ID starting with ea0c59dc1f8aec53e1c4afb0765f908b22d246ad2b3a03aec86305f1414ec82e not found: ID does not exist"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.142189 4979 scope.go:117] "RemoveContainer" containerID="eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c"
Jan 30 22:17:10 crc kubenswrapper[4979]: E0130 22:17:10.142696 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c\": container with ID starting with eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c not found: ID does not exist" containerID="eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c"
Jan 30 22:17:10 crc kubenswrapper[4979]: I0130 22:17:10.142729 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c"} err="failed to get container status \"eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c\": rpc error: code = NotFound desc = could not find container \"eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c\": container with ID starting with eb82f2ccc78c10553face3a186d4fd0b29a1b9cef20c9266a7c58f1c166a392c not found: ID does not exist"
Jan 30 22:17:11 crc kubenswrapper[4979]: I0130 22:17:11.078722 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" path="/var/lib/kubelet/pods/5488cdca-2b6c-4fa2-bd28-103b7babd258/volumes"
Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.039875 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.040664 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.040722 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg"
Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.041499 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.041555 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" gracePeriod=600
Jan 30 22:17:32 crc kubenswrapper[4979]: E0130 22:17:32.160281 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.216346 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" exitCode=0
Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.216477 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"}
Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.216565 4979 scope.go:117] "RemoveContainer" containerID="5a3026fb9e26d3616c6dc68ee7fd700cea35f3ff62a0741f624c5af22c234a87"
Jan 30 22:17:32 crc kubenswrapper[4979]: I0130 22:17:32.217227 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:17:32 crc kubenswrapper[4979]: E0130 22:17:32.217580 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:17:45 crc kubenswrapper[4979]: I0130 22:17:45.074397 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:17:45 crc kubenswrapper[4979]: E0130 22:17:45.075292 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:18:00 crc kubenswrapper[4979]: I0130 22:18:00.070728 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:18:00 crc kubenswrapper[4979]: E0130 22:18:00.071905 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:18:11 crc kubenswrapper[4979]: I0130 22:18:11.070666 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:18:11 crc kubenswrapper[4979]: E0130 22:18:11.072324 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:18:23 crc kubenswrapper[4979]: I0130 22:18:23.069959 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:18:23 crc kubenswrapper[4979]: E0130 22:18:23.071752 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:18:38 crc kubenswrapper[4979]: I0130 22:18:38.071309 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:18:38 crc kubenswrapper[4979]: E0130 22:18:38.073644 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.069537 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:18:50 crc kubenswrapper[4979]: E0130 22:18:50.070318 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.474775 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8258p"]
Jan 30 22:18:50 crc kubenswrapper[4979]: E0130 22:18:50.475195 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="registry-server"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.475215 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="registry-server"
Jan 30 22:18:50 crc kubenswrapper[4979]: E0130 22:18:50.475228 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="extract-utilities"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.475236 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="extract-utilities"
Jan 30 22:18:50 crc kubenswrapper[4979]: E0130 22:18:50.475257 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="extract-content"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.475264 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="extract-content"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.475453 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5488cdca-2b6c-4fa2-bd28-103b7babd258" containerName="registry-server"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.476654 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.493303 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8258p"]
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.602894 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sc84\" (UniqueName: \"kubernetes.io/projected/0cf5e122-2db4-4c3f-b6db-250788b13137-kube-api-access-5sc84\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.603319 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-catalog-content\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.603525 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-utilities\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.705157 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sc84\" (UniqueName: \"kubernetes.io/projected/0cf5e122-2db4-4c3f-b6db-250788b13137-kube-api-access-5sc84\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.705239 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-catalog-content\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.705375 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-utilities\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.705886 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-utilities\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.706460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-catalog-content\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.729144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sc84\" (UniqueName: \"kubernetes.io/projected/0cf5e122-2db4-4c3f-b6db-250788b13137-kube-api-access-5sc84\") pod \"redhat-marketplace-8258p\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " pod="openshift-marketplace/redhat-marketplace-8258p"
Jan 30 22:18:50 crc kubenswrapper[4979]: I0130 22:18:50.802126 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8258p"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8258p" Jan 30 22:18:51 crc kubenswrapper[4979]: I0130 22:18:51.271203 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8258p"] Jan 30 22:18:51 crc kubenswrapper[4979]: I0130 22:18:51.978150 4979 generic.go:334] "Generic (PLEG): container finished" podID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerID="3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77" exitCode=0 Jan 30 22:18:51 crc kubenswrapper[4979]: I0130 22:18:51.978202 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8258p" event={"ID":"0cf5e122-2db4-4c3f-b6db-250788b13137","Type":"ContainerDied","Data":"3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77"} Jan 30 22:18:51 crc kubenswrapper[4979]: I0130 22:18:51.978232 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8258p" event={"ID":"0cf5e122-2db4-4c3f-b6db-250788b13137","Type":"ContainerStarted","Data":"b7e2aa97e43eb6946831cc7a88d9f10f88fab647f7641efd18f21e2c664f44d8"} Jan 30 22:18:51 crc kubenswrapper[4979]: I0130 22:18:51.981419 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:18:54 crc kubenswrapper[4979]: I0130 22:18:54.754135 4979 generic.go:334] "Generic (PLEG): container finished" podID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerID="c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64" exitCode=0 Jan 30 22:18:54 crc kubenswrapper[4979]: I0130 22:18:54.754228 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8258p" event={"ID":"0cf5e122-2db4-4c3f-b6db-250788b13137","Type":"ContainerDied","Data":"c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64"} Jan 30 22:18:55 crc kubenswrapper[4979]: I0130 22:18:55.767331 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8258p" event={"ID":"0cf5e122-2db4-4c3f-b6db-250788b13137","Type":"ContainerStarted","Data":"83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186"} Jan 30 22:19:00 crc kubenswrapper[4979]: I0130 22:19:00.802999 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8258p" Jan 30 22:19:00 crc kubenswrapper[4979]: I0130 22:19:00.803428 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8258p" Jan 30 22:19:00 crc kubenswrapper[4979]: I0130 22:19:00.850427 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8258p" Jan 30 22:19:00 crc kubenswrapper[4979]: I0130 22:19:00.869849 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8258p" podStartSLOduration=7.649781619 podStartE2EDuration="10.86982959s" podCreationTimestamp="2026-01-30 22:18:50 +0000 UTC" firstStartedPulling="2026-01-30 22:18:51.980860661 +0000 UTC m=+2327.942107714" lastFinishedPulling="2026-01-30 22:18:55.200908652 +0000 UTC m=+2331.162155685" observedRunningTime="2026-01-30 22:18:55.793755491 +0000 UTC m=+2331.755002524" watchObservedRunningTime="2026-01-30 22:19:00.86982959 +0000 UTC m=+2336.831076623" Jan 30 22:19:01 crc kubenswrapper[4979]: I0130 22:19:01.848002 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-8258p" Jan 30 22:19:01 crc kubenswrapper[4979]: I0130 22:19:01.895611 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8258p"] Jan 30 22:19:03 crc kubenswrapper[4979]: I0130 22:19:03.819764 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8258p" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="registry-server" containerID="cri-o://83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186" gracePeriod=2 Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.290326 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8258p" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.421414 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-utilities\") pod \"0cf5e122-2db4-4c3f-b6db-250788b13137\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.421590 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-catalog-content\") pod \"0cf5e122-2db4-4c3f-b6db-250788b13137\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.421629 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sc84\" (UniqueName: \"kubernetes.io/projected/0cf5e122-2db4-4c3f-b6db-250788b13137-kube-api-access-5sc84\") pod \"0cf5e122-2db4-4c3f-b6db-250788b13137\" (UID: \"0cf5e122-2db4-4c3f-b6db-250788b13137\") " Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.422511 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-utilities" (OuterVolumeSpecName: "utilities") pod "0cf5e122-2db4-4c3f-b6db-250788b13137" (UID: "0cf5e122-2db4-4c3f-b6db-250788b13137"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.427166 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf5e122-2db4-4c3f-b6db-250788b13137-kube-api-access-5sc84" (OuterVolumeSpecName: "kube-api-access-5sc84") pod "0cf5e122-2db4-4c3f-b6db-250788b13137" (UID: "0cf5e122-2db4-4c3f-b6db-250788b13137"). InnerVolumeSpecName "kube-api-access-5sc84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.449677 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cf5e122-2db4-4c3f-b6db-250788b13137" (UID: "0cf5e122-2db4-4c3f-b6db-250788b13137"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.523173 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.523244 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf5e122-2db4-4c3f-b6db-250788b13137-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.523259 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sc84\" (UniqueName: \"kubernetes.io/projected/0cf5e122-2db4-4c3f-b6db-250788b13137-kube-api-access-5sc84\") on node \"crc\" DevicePath \"\"" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.828116 4979 generic.go:334] "Generic (PLEG): container finished" podID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerID="83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186" exitCode=0 Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.828164 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8258p" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.828189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8258p" event={"ID":"0cf5e122-2db4-4c3f-b6db-250788b13137","Type":"ContainerDied","Data":"83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186"} Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.828233 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8258p" event={"ID":"0cf5e122-2db4-4c3f-b6db-250788b13137","Type":"ContainerDied","Data":"b7e2aa97e43eb6946831cc7a88d9f10f88fab647f7641efd18f21e2c664f44d8"} Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.828255 4979 scope.go:117] "RemoveContainer" containerID="83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.848395 4979 scope.go:117] "RemoveContainer" containerID="c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.867620 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8258p"] Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.872870 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8258p"] Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.888685 4979 scope.go:117] "RemoveContainer" containerID="3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.903896 4979 scope.go:117] "RemoveContainer" containerID="83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186" Jan 30 22:19:04 crc kubenswrapper[4979]: E0130 22:19:04.904281 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186\": container with ID starting with 83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186 not found: ID does not exist" containerID="83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.904326 4979 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186"} err="failed to get container status \"83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186\": rpc error: code = NotFound desc = could not find container \"83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186\": container with ID starting with 83af9e97b7b252d74b91f2df186ad1d7c85b0ea0b09a179a8ae63b00c3689186 not found: ID does not exist" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.904354 4979 scope.go:117] "RemoveContainer" containerID="c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64" Jan 30 22:19:04 crc kubenswrapper[4979]: E0130 22:19:04.904591 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64\": container with ID starting with c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64 not found: ID does not exist" containerID="c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.904618 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64"} err="failed to get container status \"c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64\": rpc error: code = NotFound desc = could not find container \"c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64\": container with ID starting with c36c7b61eb0c4748d0675d2f831d07f34afee440823f6ccc6e377c18d994fa64 not found: ID does not exist" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.904634 4979 scope.go:117] "RemoveContainer" containerID="3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77" Jan 30 22:19:04 crc kubenswrapper[4979]: E0130 22:19:04.904902 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77\": container with ID starting with 3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77 not found: ID does not exist" containerID="3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77" Jan 30 22:19:04 crc kubenswrapper[4979]: I0130 22:19:04.904926 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77"} err="failed to get container status \"3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77\": rpc error: code = NotFound desc = could not find container \"3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77\": container with ID starting with 3a5eae5a50c78d2c4a66746304b9d612633502d5a4251e9f601af00252e9ef77 not found: ID does not exist" Jan 30 22:19:05 crc kubenswrapper[4979]: I0130 22:19:05.073727 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" Jan 30 22:19:05 crc kubenswrapper[4979]: E0130 22:19:05.074609 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 22:19:05 crc kubenswrapper[4979]: I0130 22:19:05.078702 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" path="/var/lib/kubelet/pods/0cf5e122-2db4-4c3f-b6db-250788b13137/volumes"
Jan 30 22:19:18 crc kubenswrapper[4979]: I0130 22:19:18.070531 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:19:18 crc kubenswrapper[4979]: E0130 22:19:18.071482 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:19:31 crc kubenswrapper[4979]: I0130 22:19:31.070238 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:19:31 crc kubenswrapper[4979]: E0130 22:19:31.071313 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:19:42 crc kubenswrapper[4979]: I0130 22:19:42.069789 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:19:42 crc kubenswrapper[4979]: E0130 22:19:42.070478 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:19:55 crc kubenswrapper[4979]: I0130 22:19:55.075263 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:19:55 crc kubenswrapper[4979]: E0130 22:19:55.076339 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:20:09 crc kubenswrapper[4979]: I0130 22:20:09.069777 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:20:09 crc kubenswrapper[4979]: E0130 22:20:09.070592 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:20:21 crc kubenswrapper[4979]: I0130 22:20:21.069955 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:20:21 crc kubenswrapper[4979]: E0130 22:20:21.071013 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:20:32 crc kubenswrapper[4979]: I0130 22:20:32.070277 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:20:32 crc kubenswrapper[4979]: E0130 22:20:32.071045 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:20:45 crc kubenswrapper[4979]: I0130 22:20:45.073636 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:20:45 crc kubenswrapper[4979]: E0130 22:20:45.076305 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:20:56 crc kubenswrapper[4979]: I0130 22:20:56.069619 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:20:56 crc kubenswrapper[4979]: E0130 22:20:56.070489 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:21:10 crc kubenswrapper[4979]: I0130 22:21:10.069844 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:21:10 crc kubenswrapper[4979]: E0130 22:21:10.070683 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:21:23 crc kubenswrapper[4979]: I0130 22:21:23.070642 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:21:23 crc kubenswrapper[4979]: E0130 22:21:23.072093 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:21:35 crc kubenswrapper[4979]: I0130 22:21:35.074607 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:21:35 crc kubenswrapper[4979]: E0130 22:21:35.076357 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:21:47 crc kubenswrapper[4979]: I0130 22:21:47.070392 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:21:47 crc kubenswrapper[4979]: E0130 22:21:47.071047 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:22:01 crc kubenswrapper[4979]: I0130 22:22:01.070301 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467"
Jan 30 22:22:01 crc kubenswrapper[4979]: E0130 22:22:01.070982 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.135875 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b9qnz"]
Jan 30 22:22:12 crc kubenswrapper[4979]: E0130 22:22:12.136849 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="extract-utilities"
Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.136868 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="extract-utilities"
Jan 30 22:22:12 crc kubenswrapper[4979]: E0130 22:22:12.136882 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="extract-content"
containerName="extract-content" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.136889 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="extract-content" Jan 30 22:22:12 crc kubenswrapper[4979]: E0130 22:22:12.136921 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="registry-server" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.136930 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="registry-server" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.137118 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf5e122-2db4-4c3f-b6db-250788b13137" containerName="registry-server" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.138420 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.158063 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b9qnz"] Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.288906 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-utilities\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.288983 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-catalog-content\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.289155 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfthc\" (UniqueName: \"kubernetes.io/projected/04c52e7b-84ba-42ed-8feb-0b762719d029-kube-api-access-bfthc\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.390866 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfthc\" (UniqueName: \"kubernetes.io/projected/04c52e7b-84ba-42ed-8feb-0b762719d029-kube-api-access-bfthc\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.390953 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-utilities\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.390987 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-catalog-content\") pod \"certified-operators-b9qnz\" (UID: 
\"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.391552 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-utilities\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.391566 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-catalog-content\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.420168 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfthc\" (UniqueName: \"kubernetes.io/projected/04c52e7b-84ba-42ed-8feb-0b762719d029-kube-api-access-bfthc\") pod \"certified-operators-b9qnz\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.462229 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:12 crc kubenswrapper[4979]: I0130 22:22:12.990087 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b9qnz"] Jan 30 22:22:13 crc kubenswrapper[4979]: I0130 22:22:13.267970 4979 generic.go:334] "Generic (PLEG): container finished" podID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerID="fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08" exitCode=0 Jan 30 22:22:13 crc kubenswrapper[4979]: I0130 22:22:13.268044 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9qnz" event={"ID":"04c52e7b-84ba-42ed-8feb-0b762719d029","Type":"ContainerDied","Data":"fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08"} Jan 30 22:22:13 crc kubenswrapper[4979]: I0130 22:22:13.268073 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9qnz" event={"ID":"04c52e7b-84ba-42ed-8feb-0b762719d029","Type":"ContainerStarted","Data":"d781cdde8125c9d069fa1fd3beaffd1896bbc518ab3100d82c8a55ce4a8432fd"} Jan 30 22:22:15 crc kubenswrapper[4979]: I0130 22:22:15.078110 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" Jan 30 22:22:15 crc kubenswrapper[4979]: E0130 22:22:15.079393 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:22:15 crc kubenswrapper[4979]: I0130 22:22:15.288006 4979 generic.go:334] "Generic (PLEG): container finished" podID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerID="bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8" exitCode=0 Jan 30 22:22:15 crc kubenswrapper[4979]: I0130 22:22:15.288107 4979 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9qnz" event={"ID":"04c52e7b-84ba-42ed-8feb-0b762719d029","Type":"ContainerDied","Data":"bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8"} Jan 30 22:22:17 crc kubenswrapper[4979]: I0130 22:22:17.309309 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9qnz" event={"ID":"04c52e7b-84ba-42ed-8feb-0b762719d029","Type":"ContainerStarted","Data":"2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df"} Jan 30 22:22:18 crc kubenswrapper[4979]: I0130 22:22:18.352965 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b9qnz" podStartSLOduration=2.689777086 podStartE2EDuration="6.352934023s" podCreationTimestamp="2026-01-30 22:22:12 +0000 UTC" firstStartedPulling="2026-01-30 22:22:13.270308996 +0000 UTC m=+2529.231556039" lastFinishedPulling="2026-01-30 22:22:16.933465943 +0000 UTC m=+2532.894712976" observedRunningTime="2026-01-30 22:22:18.349365517 +0000 UTC m=+2534.310612560" watchObservedRunningTime="2026-01-30 22:22:18.352934023 +0000 UTC m=+2534.314181066" Jan 30 22:22:22 crc kubenswrapper[4979]: I0130 22:22:22.463061 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:22 crc kubenswrapper[4979]: I0130 22:22:22.463310 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:22 crc kubenswrapper[4979]: I0130 22:22:22.514300 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:23 crc kubenswrapper[4979]: I0130 22:22:23.395524 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:23 crc kubenswrapper[4979]: I0130 22:22:23.444400 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b9qnz"] Jan 30 22:22:25 crc kubenswrapper[4979]: I0130 22:22:25.372125 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b9qnz" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="registry-server" containerID="cri-o://2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df" gracePeriod=2 Jan 30 22:22:26 crc kubenswrapper[4979]: I0130 22:22:26.921259 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.067680 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-catalog-content\") pod \"04c52e7b-84ba-42ed-8feb-0b762719d029\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.067770 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-utilities\") pod \"04c52e7b-84ba-42ed-8feb-0b762719d029\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.067907 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfthc\" (UniqueName: \"kubernetes.io/projected/04c52e7b-84ba-42ed-8feb-0b762719d029-kube-api-access-bfthc\") pod \"04c52e7b-84ba-42ed-8feb-0b762719d029\" (UID: \"04c52e7b-84ba-42ed-8feb-0b762719d029\") " Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.074321 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c52e7b-84ba-42ed-8feb-0b762719d029-kube-api-access-bfthc" (OuterVolumeSpecName: "kube-api-access-bfthc") pod "04c52e7b-84ba-42ed-8feb-0b762719d029" (UID: "04c52e7b-84ba-42ed-8feb-0b762719d029"). InnerVolumeSpecName "kube-api-access-bfthc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.077431 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-utilities" (OuterVolumeSpecName: "utilities") pod "04c52e7b-84ba-42ed-8feb-0b762719d029" (UID: "04c52e7b-84ba-42ed-8feb-0b762719d029"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.136172 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04c52e7b-84ba-42ed-8feb-0b762719d029" (UID: "04c52e7b-84ba-42ed-8feb-0b762719d029"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.170021 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.170063 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c52e7b-84ba-42ed-8feb-0b762719d029-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.170072 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfthc\" (UniqueName: \"kubernetes.io/projected/04c52e7b-84ba-42ed-8feb-0b762719d029-kube-api-access-bfthc\") on node \"crc\" DevicePath \"\"" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.387254 4979 generic.go:334] "Generic (PLEG): container finished" podID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerID="2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df" exitCode=0 Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.387310 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9qnz" event={"ID":"04c52e7b-84ba-42ed-8feb-0b762719d029","Type":"ContainerDied","Data":"2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df"} Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.387345 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9qnz" event={"ID":"04c52e7b-84ba-42ed-8feb-0b762719d029","Type":"ContainerDied","Data":"d781cdde8125c9d069fa1fd3beaffd1896bbc518ab3100d82c8a55ce4a8432fd"} Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.387365 4979 scope.go:117] "RemoveContainer" containerID="2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.387521 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b9qnz" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.429586 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b9qnz"] Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.434292 4979 scope.go:117] "RemoveContainer" containerID="bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.437226 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b9qnz"] Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.457857 4979 scope.go:117] "RemoveContainer" containerID="fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.473968 4979 scope.go:117] "RemoveContainer" containerID="2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df" Jan 30 22:22:27 crc kubenswrapper[4979]: E0130 22:22:27.474570 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df\": container with ID starting with 2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df not found: ID does not exist" containerID="2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.474605 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df"} err="failed to get container status \"2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df\": rpc error: code = NotFound desc = could not find container \"2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df\": container with ID starting with 2172b96dc9fea387611f8b5ba5d9f9c6994aeb42e2bb8e3baaed02f4ebf138df not found: ID does not exist" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.474640 4979 scope.go:117] "RemoveContainer" containerID="bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8" Jan 30 22:22:27 crc kubenswrapper[4979]: E0130 22:22:27.474999 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8\": container with ID starting with bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8 not found: ID does not exist" containerID="bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.475063 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8"} err="failed to get container status \"bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8\": rpc error: code = NotFound desc = could not find container \"bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8\": container with ID starting with bff3194242cc28a5481a02cb14a73671af62dae3ffb8e37f58813eb96bfe44e8 not found: ID does not exist" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.475092 4979 scope.go:117] "RemoveContainer" containerID="fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08" Jan 30 22:22:27 crc kubenswrapper[4979]: E0130 22:22:27.475387 4979 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08\": container with ID starting with fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08 not found: ID does not exist" containerID="fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08" Jan 30 22:22:27 crc kubenswrapper[4979]: I0130 22:22:27.475413 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08"} err="failed to get container status \"fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08\": rpc error: code = NotFound desc = could not find container \"fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08\": container with ID starting with fa74403b57052176c2d52d6286f81ad7def1082cf061408acfcf8849f868aa08 not found: ID does not exist" Jan 30 22:22:29 crc kubenswrapper[4979]: I0130 22:22:29.069804 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" Jan 30 22:22:29 crc kubenswrapper[4979]: E0130 22:22:29.070198 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:22:29 crc kubenswrapper[4979]: I0130 22:22:29.078639 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" path="/var/lib/kubelet/pods/04c52e7b-84ba-42ed-8feb-0b762719d029/volumes" Jan 30 22:22:40 crc kubenswrapper[4979]: I0130 22:22:40.069893 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" Jan 30 22:22:41 crc kubenswrapper[4979]: I0130 22:22:41.491090 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"29318cb4e8b5f9a388731de0406342e1d8920bb530cf511d7cbaecb60a3378ee"} Jan 30 22:23:03 crc kubenswrapper[4979]: I0130 22:23:03.998139 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fwrbq"] Jan 30 22:23:04 crc kubenswrapper[4979]: E0130 22:23:03.999144 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="extract-utilities" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:03.999164 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="extract-utilities" Jan 30 22:23:04 crc kubenswrapper[4979]: E0130 22:23:03.999179 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="extract-content" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:03.999189 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="extract-content" Jan 30 22:23:04 crc kubenswrapper[4979]: E0130 22:23:03.999206 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="registry-server" 
Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:03.999214 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="registry-server" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:03.999392 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c52e7b-84ba-42ed-8feb-0b762719d029" containerName="registry-server" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.000571 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.020586 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwrbq"] Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.079118 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-utilities\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.079207 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqs7r\" (UniqueName: \"kubernetes.io/projected/8592764d-c12c-4340-8bc1-a8ac67545450-kube-api-access-vqs7r\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.079248 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-catalog-content\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.181209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-utilities\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.181301 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqs7r\" (UniqueName: \"kubernetes.io/projected/8592764d-c12c-4340-8bc1-a8ac67545450-kube-api-access-vqs7r\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.181349 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-catalog-content\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.181801 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-catalog-content\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " 
pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.181795 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-utilities\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.212254 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqs7r\" (UniqueName: \"kubernetes.io/projected/8592764d-c12c-4340-8bc1-a8ac67545450-kube-api-access-vqs7r\") pod \"community-operators-fwrbq\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.364880 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:04 crc kubenswrapper[4979]: I0130 22:23:04.902910 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwrbq"] Jan 30 22:23:05 crc kubenswrapper[4979]: I0130 22:23:05.683220 4979 generic.go:334] "Generic (PLEG): container finished" podID="8592764d-c12c-4340-8bc1-a8ac67545450" containerID="80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676" exitCode=0 Jan 30 22:23:05 crc kubenswrapper[4979]: I0130 22:23:05.683426 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerDied","Data":"80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676"} Jan 30 22:23:05 crc kubenswrapper[4979]: I0130 22:23:05.683725 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerStarted","Data":"e10688bf7972c96d2aaf0e41ff4a1887b252366702cb92081eb97dde69ea3dfc"} Jan 30 22:23:06 crc kubenswrapper[4979]: I0130 22:23:06.695472 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerStarted","Data":"3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044"} Jan 30 22:23:07 crc kubenswrapper[4979]: I0130 22:23:07.709008 4979 generic.go:334] "Generic (PLEG): container finished" podID="8592764d-c12c-4340-8bc1-a8ac67545450" containerID="3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044" exitCode=0 Jan 30 22:23:07 crc kubenswrapper[4979]: I0130 22:23:07.709094 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerDied","Data":"3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044"} Jan 30 22:23:08 crc kubenswrapper[4979]: I0130 22:23:08.721558 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerStarted","Data":"811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a"} Jan 30 22:23:08 crc kubenswrapper[4979]: I0130 22:23:08.759134 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fwrbq" podStartSLOduration=3.313729743 
podStartE2EDuration="5.759091746s" podCreationTimestamp="2026-01-30 22:23:03 +0000 UTC" firstStartedPulling="2026-01-30 22:23:05.685189586 +0000 UTC m=+2581.646436619" lastFinishedPulling="2026-01-30 22:23:08.130551579 +0000 UTC m=+2584.091798622" observedRunningTime="2026-01-30 22:23:08.738385655 +0000 UTC m=+2584.699632728" watchObservedRunningTime="2026-01-30 22:23:08.759091746 +0000 UTC m=+2584.720338799" Jan 30 22:23:13 crc kubenswrapper[4979]: E0130 22:23:13.801060 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-conmon-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:23:14 crc kubenswrapper[4979]: I0130 22:23:14.365272 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:14 crc kubenswrapper[4979]: I0130 22:23:14.365329 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:14 crc kubenswrapper[4979]: I0130 22:23:14.403110 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:14 crc kubenswrapper[4979]: I0130 22:23:14.839408 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:14 crc kubenswrapper[4979]: I0130 22:23:14.892442 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwrbq"] Jan 30 22:23:16 crc kubenswrapper[4979]: I0130 22:23:16.803537 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fwrbq" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="registry-server" containerID="cri-o://811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a" gracePeriod=2 Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.271108 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.407755 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqs7r\" (UniqueName: \"kubernetes.io/projected/8592764d-c12c-4340-8bc1-a8ac67545450-kube-api-access-vqs7r\") pod \"8592764d-c12c-4340-8bc1-a8ac67545450\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.408081 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-utilities\") pod \"8592764d-c12c-4340-8bc1-a8ac67545450\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.408147 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-catalog-content\") pod \"8592764d-c12c-4340-8bc1-a8ac67545450\" (UID: \"8592764d-c12c-4340-8bc1-a8ac67545450\") " Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.409211 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-utilities" (OuterVolumeSpecName: "utilities") pod "8592764d-c12c-4340-8bc1-a8ac67545450" (UID: "8592764d-c12c-4340-8bc1-a8ac67545450"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.419295 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8592764d-c12c-4340-8bc1-a8ac67545450-kube-api-access-vqs7r" (OuterVolumeSpecName: "kube-api-access-vqs7r") pod "8592764d-c12c-4340-8bc1-a8ac67545450" (UID: "8592764d-c12c-4340-8bc1-a8ac67545450"). InnerVolumeSpecName "kube-api-access-vqs7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.464381 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8592764d-c12c-4340-8bc1-a8ac67545450" (UID: "8592764d-c12c-4340-8bc1-a8ac67545450"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.509515 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.509548 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8592764d-c12c-4340-8bc1-a8ac67545450-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.509561 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqs7r\" (UniqueName: \"kubernetes.io/projected/8592764d-c12c-4340-8bc1-a8ac67545450-kube-api-access-vqs7r\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.814054 4979 generic.go:334] "Generic (PLEG): container finished" podID="8592764d-c12c-4340-8bc1-a8ac67545450" containerID="811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a" exitCode=0 Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.814178 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwrbq" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.814201 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerDied","Data":"811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a"} Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.815609 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwrbq" event={"ID":"8592764d-c12c-4340-8bc1-a8ac67545450","Type":"ContainerDied","Data":"e10688bf7972c96d2aaf0e41ff4a1887b252366702cb92081eb97dde69ea3dfc"} Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.815642 4979 scope.go:117] "RemoveContainer" containerID="811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.840744 4979 scope.go:117] "RemoveContainer" containerID="3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.866199 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwrbq"] Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.867754 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fwrbq"] Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.884151 4979 scope.go:117] "RemoveContainer" containerID="80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.917599 4979 scope.go:117] "RemoveContainer" containerID="811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a" Jan 30 22:23:17 crc kubenswrapper[4979]: E0130 22:23:17.918159 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a\": container with ID starting with 811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a not found: ID does not exist" containerID="811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.918196 
4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a"} err="failed to get container status \"811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a\": rpc error: code = NotFound desc = could not find container \"811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a\": container with ID starting with 811620099d56665226e290f7396e8c2d91b95fe0edca5714e0a2576f2ce7520a not found: ID does not exist" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.918231 4979 scope.go:117] "RemoveContainer" containerID="3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044" Jan 30 22:23:17 crc kubenswrapper[4979]: E0130 22:23:17.918741 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044\": container with ID starting with 3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044 not found: ID does not exist" containerID="3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.918772 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044"} err="failed to get container status \"3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044\": rpc error: code = NotFound desc = could not find container \"3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044\": container with ID starting with 3db05f592080d080492a8eef3ff8da11f24e8608f0b7a8af3c842790279d7044 not found: ID does not exist" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.918793 4979 scope.go:117] "RemoveContainer" containerID="80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676" Jan 30 22:23:17 crc kubenswrapper[4979]: E0130 22:23:17.919107 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676\": container with ID starting with 80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676 not found: ID does not exist" containerID="80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676" Jan 30 22:23:17 crc kubenswrapper[4979]: I0130 22:23:17.919137 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676"} err="failed to get container status \"80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676\": rpc error: code = NotFound desc = could not find container \"80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676\": container with ID starting with 80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676 not found: ID does not exist" Jan 30 22:23:19 crc kubenswrapper[4979]: I0130 22:23:19.088071 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" path="/var/lib/kubelet/pods/8592764d-c12c-4340-8bc1-a8ac67545450/volumes" Jan 30 22:23:24 crc kubenswrapper[4979]: E0130 22:23:24.007859 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-conmon-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:23:34 crc kubenswrapper[4979]: E0130 22:23:34.194507 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-conmon-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:23:44 crc kubenswrapper[4979]: E0130 22:23:44.408289 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-conmon-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:23:54 crc kubenswrapper[4979]: E0130 22:23:54.602021 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-conmon-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:24:04 crc kubenswrapper[4979]: E0130 22:24:04.823207 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8592764d_c12c_4340_8bc1_a8ac67545450.slice/crio-conmon-80ce2cd88f74000a36af85bb035fa6988e72cdb4e11296259e16549604fff676.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:25:02 crc kubenswrapper[4979]: I0130 22:25:02.039631 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:25:02 crc kubenswrapper[4979]: I0130 22:25:02.041507 4979 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:25:32 crc kubenswrapper[4979]: I0130 22:25:32.040161 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:25:32 crc kubenswrapper[4979]: I0130 22:25:32.041664 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.039885 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.040703 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.040794 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.042026 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29318cb4e8b5f9a388731de0406342e1d8920bb530cf511d7cbaecb60a3378ee"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.042409 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://29318cb4e8b5f9a388731de0406342e1d8920bb530cf511d7cbaecb60a3378ee" gracePeriod=600 Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.413241 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="29318cb4e8b5f9a388731de0406342e1d8920bb530cf511d7cbaecb60a3378ee" exitCode=0 Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.413295 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"29318cb4e8b5f9a388731de0406342e1d8920bb530cf511d7cbaecb60a3378ee"} Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.413332 4979 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"} Jan 30 22:26:02 crc kubenswrapper[4979]: I0130 22:26:02.413349 4979 scope.go:117] "RemoveContainer" containerID="523e16e58b30abdc7a58297801b52370b066e12111df7cda634db95b87d7b467" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.532700 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4xz7x"] Jan 30 22:27:55 crc kubenswrapper[4979]: E0130 22:27:55.533495 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="extract-content" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.533506 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="extract-content" Jan 30 22:27:55 crc kubenswrapper[4979]: E0130 22:27:55.533533 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="registry-server" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.533539 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="registry-server" Jan 30 22:27:55 crc kubenswrapper[4979]: E0130 22:27:55.533548 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="extract-utilities" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.533554 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="extract-utilities" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.533687 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8592764d-c12c-4340-8bc1-a8ac67545450" containerName="registry-server" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.534656 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.550065 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4xz7x"] Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.731210 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-catalog-content\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.731260 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7smfm\" (UniqueName: \"kubernetes.io/projected/489c8d6c-3ea7-4861-9883-bdd71844292e-kube-api-access-7smfm\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.731324 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-utilities\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.832697 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-catalog-content\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.832744 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7smfm\" (UniqueName: \"kubernetes.io/projected/489c8d6c-3ea7-4861-9883-bdd71844292e-kube-api-access-7smfm\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.832791 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-utilities\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.833343 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-catalog-content\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.833388 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-utilities\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.856449 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7smfm\" (UniqueName: \"kubernetes.io/projected/489c8d6c-3ea7-4861-9883-bdd71844292e-kube-api-access-7smfm\") pod \"redhat-operators-4xz7x\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:55 crc kubenswrapper[4979]: I0130 22:27:55.870047 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:27:56 crc kubenswrapper[4979]: I0130 22:27:56.111196 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4xz7x"] Jan 30 22:27:56 crc kubenswrapper[4979]: I0130 22:27:56.351483 4979 generic.go:334] "Generic (PLEG): container finished" podID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerID="4d42c9e9f285f592e473a3839966dfd6596c1bd71041ce1e5dfb19b925de6b58" exitCode=0 Jan 30 22:27:56 crc kubenswrapper[4979]: I0130 22:27:56.351530 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xz7x" event={"ID":"489c8d6c-3ea7-4861-9883-bdd71844292e","Type":"ContainerDied","Data":"4d42c9e9f285f592e473a3839966dfd6596c1bd71041ce1e5dfb19b925de6b58"} Jan 30 22:27:56 crc kubenswrapper[4979]: I0130 22:27:56.351557 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xz7x" event={"ID":"489c8d6c-3ea7-4861-9883-bdd71844292e","Type":"ContainerStarted","Data":"6b99bfd4100719cf898aa3b9840bebde60ca18ebf75e1c0fca9e35c034576f49"} Jan 30 22:27:56 crc kubenswrapper[4979]: I0130 22:27:56.353473 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:27:58 crc kubenswrapper[4979]: I0130 22:27:58.375916 4979 generic.go:334] "Generic (PLEG): container finished" podID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerID="e9715dcca3d3df64fcb241ca9badb177c47124563b895d7221fbe5ccdaa630ab" exitCode=0 Jan 30 22:27:58 crc kubenswrapper[4979]: I0130 22:27:58.376016 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xz7x" event={"ID":"489c8d6c-3ea7-4861-9883-bdd71844292e","Type":"ContainerDied","Data":"e9715dcca3d3df64fcb241ca9badb177c47124563b895d7221fbe5ccdaa630ab"} Jan 30 22:27:59 crc kubenswrapper[4979]: I0130 22:27:59.387579 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xz7x" event={"ID":"489c8d6c-3ea7-4861-9883-bdd71844292e","Type":"ContainerStarted","Data":"4d876e8b8b8c2e1c19e4abdfb8e4b233c94235e5d2b6e812c7c2222da927e592"} Jan 30 22:28:02 crc kubenswrapper[4979]: I0130 22:28:02.040462 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:28:02 crc kubenswrapper[4979]: I0130 22:28:02.040869 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:28:05 crc kubenswrapper[4979]: I0130 22:28:05.870318 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:28:05 crc kubenswrapper[4979]: I0130 
22:28:05.870714 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:28:05 crc kubenswrapper[4979]: I0130 22:28:05.973113 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:28:06 crc kubenswrapper[4979]: I0130 22:28:06.004085 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4xz7x" podStartSLOduration=8.567585456 podStartE2EDuration="11.004067146s" podCreationTimestamp="2026-01-30 22:27:55 +0000 UTC" firstStartedPulling="2026-01-30 22:27:56.353259451 +0000 UTC m=+2872.314506484" lastFinishedPulling="2026-01-30 22:27:58.789741141 +0000 UTC m=+2874.750988174" observedRunningTime="2026-01-30 22:27:59.418180584 +0000 UTC m=+2875.379427617" watchObservedRunningTime="2026-01-30 22:28:06.004067146 +0000 UTC m=+2881.965314179" Jan 30 22:28:06 crc kubenswrapper[4979]: I0130 22:28:06.521837 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:28:06 crc kubenswrapper[4979]: I0130 22:28:06.590256 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4xz7x"] Jan 30 22:28:08 crc kubenswrapper[4979]: I0130 22:28:08.471951 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4xz7x" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="registry-server" containerID="cri-o://4d876e8b8b8c2e1c19e4abdfb8e4b233c94235e5d2b6e812c7c2222da927e592" gracePeriod=2 Jan 30 22:28:09 crc kubenswrapper[4979]: I0130 22:28:09.485085 4979 generic.go:334] "Generic (PLEG): container finished" podID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerID="4d876e8b8b8c2e1c19e4abdfb8e4b233c94235e5d2b6e812c7c2222da927e592" exitCode=0 Jan 30 22:28:09 crc kubenswrapper[4979]: I0130 22:28:09.485142 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xz7x" event={"ID":"489c8d6c-3ea7-4861-9883-bdd71844292e","Type":"ContainerDied","Data":"4d876e8b8b8c2e1c19e4abdfb8e4b233c94235e5d2b6e812c7c2222da927e592"} Jan 30 22:28:09 crc kubenswrapper[4979]: I0130 22:28:09.990582 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.184899 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-utilities\") pod \"489c8d6c-3ea7-4861-9883-bdd71844292e\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.185130 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7smfm\" (UniqueName: \"kubernetes.io/projected/489c8d6c-3ea7-4861-9883-bdd71844292e-kube-api-access-7smfm\") pod \"489c8d6c-3ea7-4861-9883-bdd71844292e\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.185534 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-catalog-content\") pod \"489c8d6c-3ea7-4861-9883-bdd71844292e\" (UID: \"489c8d6c-3ea7-4861-9883-bdd71844292e\") " Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.186306 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-utilities" (OuterVolumeSpecName: "utilities") pod "489c8d6c-3ea7-4861-9883-bdd71844292e" (UID: "489c8d6c-3ea7-4861-9883-bdd71844292e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.193536 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489c8d6c-3ea7-4861-9883-bdd71844292e-kube-api-access-7smfm" (OuterVolumeSpecName: "kube-api-access-7smfm") pod "489c8d6c-3ea7-4861-9883-bdd71844292e" (UID: "489c8d6c-3ea7-4861-9883-bdd71844292e"). InnerVolumeSpecName "kube-api-access-7smfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.288317 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7smfm\" (UniqueName: \"kubernetes.io/projected/489c8d6c-3ea7-4861-9883-bdd71844292e-kube-api-access-7smfm\") on node \"crc\" DevicePath \"\"" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.288377 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.318846 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "489c8d6c-3ea7-4861-9883-bdd71844292e" (UID: "489c8d6c-3ea7-4861-9883-bdd71844292e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.390011 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489c8d6c-3ea7-4861-9883-bdd71844292e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.493802 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xz7x" event={"ID":"489c8d6c-3ea7-4861-9883-bdd71844292e","Type":"ContainerDied","Data":"6b99bfd4100719cf898aa3b9840bebde60ca18ebf75e1c0fca9e35c034576f49"} Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.493873 4979 scope.go:117] "RemoveContainer" containerID="4d876e8b8b8c2e1c19e4abdfb8e4b233c94235e5d2b6e812c7c2222da927e592" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.494022 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4xz7x" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.519439 4979 scope.go:117] "RemoveContainer" containerID="e9715dcca3d3df64fcb241ca9badb177c47124563b895d7221fbe5ccdaa630ab" Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.528591 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4xz7x"] Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.533923 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4xz7x"] Jan 30 22:28:10 crc kubenswrapper[4979]: I0130 22:28:10.554309 4979 scope.go:117] "RemoveContainer" containerID="4d42c9e9f285f592e473a3839966dfd6596c1bd71041ce1e5dfb19b925de6b58" Jan 30 22:28:11 crc kubenswrapper[4979]: I0130 22:28:11.078085 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" path="/var/lib/kubelet/pods/489c8d6c-3ea7-4861-9883-bdd71844292e/volumes" Jan 30 22:28:32 crc kubenswrapper[4979]: I0130 22:28:32.040413 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:28:32 crc kubenswrapper[4979]: I0130 22:28:32.041126 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.039912 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.041682 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.041792 4979 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.042450 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.042577 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" gracePeriod=600 Jan 30 22:29:02 crc kubenswrapper[4979]: E0130 22:29:02.168269 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.909543 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" exitCode=0 Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.909760 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"} Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.910012 4979 scope.go:117] "RemoveContainer" containerID="29318cb4e8b5f9a388731de0406342e1d8920bb530cf511d7cbaecb60a3378ee" Jan 30 22:29:02 crc kubenswrapper[4979]: I0130 22:29:02.911126 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:29:02 crc kubenswrapper[4979]: E0130 22:29:02.911606 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:29:18 crc kubenswrapper[4979]: I0130 22:29:18.070430 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:29:18 crc kubenswrapper[4979]: E0130 22:29:18.071825 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.003259 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8mnlw"] Jan 30 22:29:23 crc kubenswrapper[4979]: E0130 22:29:23.004098 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="extract-content" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.004110 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="extract-content" Jan 30 22:29:23 crc kubenswrapper[4979]: E0130 22:29:23.004133 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="registry-server" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.004139 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="registry-server" Jan 30 22:29:23 crc kubenswrapper[4979]: E0130 22:29:23.004150 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="extract-utilities" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.004156 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="extract-utilities" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.004311 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="489c8d6c-3ea7-4861-9883-bdd71844292e" containerName="registry-server" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.007568 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.014671 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mnlw"] Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.086725 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-utilities\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.086795 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2j2j\" (UniqueName: \"kubernetes.io/projected/1a898665-750f-49f6-8989-dcaf8b7e9f03-kube-api-access-f2j2j\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.086975 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-catalog-content\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.188651 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-utilities\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " 
pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.188703 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2j2j\" (UniqueName: \"kubernetes.io/projected/1a898665-750f-49f6-8989-dcaf8b7e9f03-kube-api-access-f2j2j\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.188734 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-catalog-content\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.189208 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-utilities\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.189275 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-catalog-content\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.211709 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2j2j\" (UniqueName: \"kubernetes.io/projected/1a898665-750f-49f6-8989-dcaf8b7e9f03-kube-api-access-f2j2j\") pod \"redhat-marketplace-8mnlw\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.337641 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:23 crc kubenswrapper[4979]: I0130 22:29:23.594768 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mnlw"] Jan 30 22:29:24 crc kubenswrapper[4979]: I0130 22:29:24.113499 4979 generic.go:334] "Generic (PLEG): container finished" podID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerID="d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b" exitCode=0 Jan 30 22:29:24 crc kubenswrapper[4979]: I0130 22:29:24.113607 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mnlw" event={"ID":"1a898665-750f-49f6-8989-dcaf8b7e9f03","Type":"ContainerDied","Data":"d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b"} Jan 30 22:29:24 crc kubenswrapper[4979]: I0130 22:29:24.114102 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mnlw" event={"ID":"1a898665-750f-49f6-8989-dcaf8b7e9f03","Type":"ContainerStarted","Data":"90d22e28014a6d48da0d18f0c4f9c02b3aba304fb7c185a521bba041ad9b4dea"} Jan 30 22:29:25 crc kubenswrapper[4979]: I0130 22:29:25.121585 4979 generic.go:334] "Generic (PLEG): container finished" podID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerID="c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5" exitCode=0 Jan 30 22:29:25 crc kubenswrapper[4979]: I0130 22:29:25.121645 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mnlw" event={"ID":"1a898665-750f-49f6-8989-dcaf8b7e9f03","Type":"ContainerDied","Data":"c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5"} Jan 30 22:29:26 crc kubenswrapper[4979]: I0130 22:29:26.134392 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mnlw" event={"ID":"1a898665-750f-49f6-8989-dcaf8b7e9f03","Type":"ContainerStarted","Data":"dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34"} Jan 30 22:29:33 crc kubenswrapper[4979]: I0130 22:29:33.069713 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:29:33 crc kubenswrapper[4979]: E0130 22:29:33.070416 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:29:33 crc kubenswrapper[4979]: I0130 22:29:33.338528 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:33 crc kubenswrapper[4979]: I0130 22:29:33.338595 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:33 crc kubenswrapper[4979]: I0130 22:29:33.401571 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:33 crc kubenswrapper[4979]: I0130 22:29:33.440359 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8mnlw" podStartSLOduration=10.047718304 podStartE2EDuration="11.440337734s" 
podCreationTimestamp="2026-01-30 22:29:22 +0000 UTC" firstStartedPulling="2026-01-30 22:29:24.11720473 +0000 UTC m=+2960.078451793" lastFinishedPulling="2026-01-30 22:29:25.50982415 +0000 UTC m=+2961.471071223" observedRunningTime="2026-01-30 22:29:26.165531002 +0000 UTC m=+2962.126778045" watchObservedRunningTime="2026-01-30 22:29:33.440337734 +0000 UTC m=+2969.401584767" Jan 30 22:29:34 crc kubenswrapper[4979]: I0130 22:29:34.260258 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:34 crc kubenswrapper[4979]: I0130 22:29:34.309951 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mnlw"] Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.205831 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8mnlw" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="registry-server" containerID="cri-o://dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34" gracePeriod=2 Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.617810 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.796290 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-catalog-content\") pod \"1a898665-750f-49f6-8989-dcaf8b7e9f03\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.796457 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-utilities\") pod \"1a898665-750f-49f6-8989-dcaf8b7e9f03\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.796516 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2j2j\" (UniqueName: \"kubernetes.io/projected/1a898665-750f-49f6-8989-dcaf8b7e9f03-kube-api-access-f2j2j\") pod \"1a898665-750f-49f6-8989-dcaf8b7e9f03\" (UID: \"1a898665-750f-49f6-8989-dcaf8b7e9f03\") " Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.797562 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-utilities" (OuterVolumeSpecName: "utilities") pod "1a898665-750f-49f6-8989-dcaf8b7e9f03" (UID: "1a898665-750f-49f6-8989-dcaf8b7e9f03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.813823 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a898665-750f-49f6-8989-dcaf8b7e9f03-kube-api-access-f2j2j" (OuterVolumeSpecName: "kube-api-access-f2j2j") pod "1a898665-750f-49f6-8989-dcaf8b7e9f03" (UID: "1a898665-750f-49f6-8989-dcaf8b7e9f03"). InnerVolumeSpecName "kube-api-access-f2j2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.825304 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a898665-750f-49f6-8989-dcaf8b7e9f03" (UID: "1a898665-750f-49f6-8989-dcaf8b7e9f03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.897875 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.897910 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2j2j\" (UniqueName: \"kubernetes.io/projected/1a898665-750f-49f6-8989-dcaf8b7e9f03-kube-api-access-f2j2j\") on node \"crc\" DevicePath \"\"" Jan 30 22:29:36 crc kubenswrapper[4979]: I0130 22:29:36.897922 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a898665-750f-49f6-8989-dcaf8b7e9f03-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.213926 4979 generic.go:334] "Generic (PLEG): container finished" podID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerID="dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34" exitCode=0 Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.213980 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mnlw" event={"ID":"1a898665-750f-49f6-8989-dcaf8b7e9f03","Type":"ContainerDied","Data":"dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34"} Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.214014 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8mnlw" event={"ID":"1a898665-750f-49f6-8989-dcaf8b7e9f03","Type":"ContainerDied","Data":"90d22e28014a6d48da0d18f0c4f9c02b3aba304fb7c185a521bba041ad9b4dea"} Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.214058 4979 scope.go:117] "RemoveContainer" containerID="dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.214094 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8mnlw" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.236862 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mnlw"] Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.242863 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8mnlw"] Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.251804 4979 scope.go:117] "RemoveContainer" containerID="c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.269440 4979 scope.go:117] "RemoveContainer" containerID="d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.296003 4979 scope.go:117] "RemoveContainer" containerID="dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34" Jan 30 22:29:37 crc kubenswrapper[4979]: E0130 22:29:37.296498 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34\": container with ID starting with dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34 not found: ID does not exist" containerID="dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.296533 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34"} err="failed to get container status \"dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34\": rpc error: code = NotFound desc = could not find container \"dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34\": container with ID starting with dbe15cfec939591728d7008a6cd49ed0938c5517f106a46177ba564710ce2d34 not found: ID does not exist" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.296553 4979 scope.go:117] "RemoveContainer" containerID="c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5" Jan 30 22:29:37 crc kubenswrapper[4979]: E0130 22:29:37.296922 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5\": container with ID starting with c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5 not found: ID does not exist" containerID="c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.296959 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5"} err="failed to get container status \"c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5\": rpc error: code = NotFound desc = could not find container \"c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5\": container with ID starting with c9450f12af1e7eef0c2df62323f24411e9818a38a5c3a5e11ca32df20ee999c5 not found: ID does not exist" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.296977 4979 scope.go:117] "RemoveContainer" containerID="d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b" Jan 30 22:29:37 crc kubenswrapper[4979]: E0130 22:29:37.297396 4979 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b\": container with ID starting with d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b not found: ID does not exist" containerID="d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b" Jan 30 22:29:37 crc kubenswrapper[4979]: I0130 22:29:37.297426 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b"} err="failed to get container status \"d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b\": rpc error: code = NotFound desc = could not find container \"d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b\": container with ID starting with d0a7dc5a84fff1c7b2a37fa3b8e35465c140d6b3117ab5c95c4ed5266e927e8b not found: ID does not exist" Jan 30 22:29:39 crc kubenswrapper[4979]: I0130 22:29:39.084121 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" path="/var/lib/kubelet/pods/1a898665-750f-49f6-8989-dcaf8b7e9f03/volumes" Jan 30 22:29:48 crc kubenswrapper[4979]: I0130 22:29:48.071128 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:29:48 crc kubenswrapper[4979]: E0130 22:29:48.072414 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.164328 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"] Jan 30 22:30:00 crc kubenswrapper[4979]: E0130 22:30:00.165113 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="extract-content" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.165124 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="extract-content" Jan 30 22:30:00 crc kubenswrapper[4979]: E0130 22:30:00.165133 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="extract-utilities" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.165140 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="extract-utilities" Jan 30 22:30:00 crc kubenswrapper[4979]: E0130 22:30:00.165160 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="registry-server" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.165166 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="registry-server" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.165307 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a898665-750f-49f6-8989-dcaf8b7e9f03" containerName="registry-server" Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.165801 4979 util.go:30] "No sandbox for pod 
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.168773 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.169429 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.178736 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"]
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.224395 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-config-volume\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.224486 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-secret-volume\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.224671 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkngc\" (UniqueName: \"kubernetes.io/projected/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-kube-api-access-dkngc\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.327439 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-config-volume\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.327494 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-secret-volume\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.327530 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkngc\" (UniqueName: \"kubernetes.io/projected/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-kube-api-access-dkngc\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.329347 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-config-volume\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.334272 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-secret-volume\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.346180 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkngc\" (UniqueName: \"kubernetes.io/projected/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-kube-api-access-dkngc\") pod \"collect-profiles-29496870-drq2x\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.498681 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"
Jan 30 22:30:00 crc kubenswrapper[4979]: I0130 22:30:00.978508 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"]
Jan 30 22:30:01 crc kubenswrapper[4979]: I0130 22:30:01.070124 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:30:01 crc kubenswrapper[4979]: E0130 22:30:01.070679 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:30:01 crc kubenswrapper[4979]: I0130 22:30:01.429704 4979 generic.go:334] "Generic (PLEG): container finished" podID="9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" containerID="06f1c39be4f79a10738471e24d46871dad22c8321fde40d1075b882f27317030" exitCode=0
Jan 30 22:30:01 crc kubenswrapper[4979]: I0130 22:30:01.429855 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" event={"ID":"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03","Type":"ContainerDied","Data":"06f1c39be4f79a10738471e24d46871dad22c8321fde40d1075b882f27317030"}
Jan 30 22:30:01 crc kubenswrapper[4979]: I0130 22:30:01.430350 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" event={"ID":"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03","Type":"ContainerStarted","Data":"f46b4ebaff8ca26f8b09659d95d6c79fbd608275551a42fefa9b0ff8022bbfb0"}
Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.802084 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"
Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.870699 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-secret-volume\") pod \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") "
Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.870746 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-config-volume\") pod \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") "
Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.870823 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkngc\" (UniqueName: \"kubernetes.io/projected/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-kube-api-access-dkngc\") pod \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\" (UID: \"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03\") "
Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.871850 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" (UID: "9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.876742 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-kube-api-access-dkngc" (OuterVolumeSpecName: "kube-api-access-dkngc") pod "9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" (UID: "9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03"). InnerVolumeSpecName "kube-api-access-dkngc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.880359 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" (UID: "9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.972511 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.972549 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 22:30:02 crc kubenswrapper[4979]: I0130 22:30:02.972561 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkngc\" (UniqueName: \"kubernetes.io/projected/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03-kube-api-access-dkngc\") on node \"crc\" DevicePath \"\""
Jan 30 22:30:03 crc kubenswrapper[4979]: I0130 22:30:03.451426 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x" event={"ID":"9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03","Type":"ContainerDied","Data":"f46b4ebaff8ca26f8b09659d95d6c79fbd608275551a42fefa9b0ff8022bbfb0"}
Jan 30 22:30:03 crc kubenswrapper[4979]: I0130 22:30:03.451475 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f46b4ebaff8ca26f8b09659d95d6c79fbd608275551a42fefa9b0ff8022bbfb0"
Jan 30 22:30:03 crc kubenswrapper[4979]: I0130 22:30:03.451540 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"
Jan 30 22:30:03 crc kubenswrapper[4979]: I0130 22:30:03.883240 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"]
Jan 30 22:30:03 crc kubenswrapper[4979]: I0130 22:30:03.889637 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-xndlp"]
Jan 30 22:30:05 crc kubenswrapper[4979]: I0130 22:30:05.092853 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd6fec1a-296c-4b7e-b06f-cb48697ce0aa" path="/var/lib/kubelet/pods/bd6fec1a-296c-4b7e-b06f-cb48697ce0aa/volumes"
Jan 30 22:30:15 crc kubenswrapper[4979]: I0130 22:30:15.081546 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:30:15 crc kubenswrapper[4979]: E0130 22:30:15.082559 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:30:25 crc kubenswrapper[4979]: I0130 22:30:25.111697 4979 scope.go:117] "RemoveContainer" containerID="0ffeefd62cefc7a667955d4354abe400003540bade5b7a6dadf2ad36b308e029"
Jan 30 22:30:26 crc kubenswrapper[4979]: I0130 22:30:26.069733 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:30:26 crc kubenswrapper[4979]: E0130 22:30:26.070777 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:30:41 crc kubenswrapper[4979]: I0130 22:30:41.070498 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:30:41 crc kubenswrapper[4979]: E0130 22:30:41.071492 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:30:56 crc kubenswrapper[4979]: I0130 22:30:56.069580 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:30:56 crc kubenswrapper[4979]: E0130 22:30:56.070644 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:31:10 crc kubenswrapper[4979]: I0130 22:31:10.069776 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:31:10 crc kubenswrapper[4979]: E0130 22:31:10.070473 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:31:22 crc kubenswrapper[4979]: I0130 22:31:22.070332 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:31:22 crc kubenswrapper[4979]: E0130 22:31:22.071406 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:31:36 crc kubenswrapper[4979]: I0130 22:31:36.069837 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:31:36 crc kubenswrapper[4979]: E0130 22:31:36.094943 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:31:47 crc kubenswrapper[4979]: I0130 22:31:47.079293 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:31:47 crc kubenswrapper[4979]: E0130 22:31:47.081866 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:32:00 crc kubenswrapper[4979]: I0130 22:32:00.069467 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:32:00 crc kubenswrapper[4979]: E0130 22:32:00.070207 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:32:13 crc kubenswrapper[4979]: I0130 22:32:13.069165 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:32:13 crc kubenswrapper[4979]: E0130 22:32:13.069891 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:32:24 crc kubenswrapper[4979]: I0130 22:32:24.069537 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:32:24 crc kubenswrapper[4979]: E0130 22:32:24.070284 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:32:36 crc kubenswrapper[4979]: I0130 22:32:36.069774 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:32:36 crc kubenswrapper[4979]: E0130 22:32:36.071343 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:32:49 crc kubenswrapper[4979]: I0130 22:32:49.071129 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:32:49 crc kubenswrapper[4979]: E0130 22:32:49.073230 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:33:02 crc kubenswrapper[4979]: I0130 22:33:02.069340 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:33:02 crc kubenswrapper[4979]: E0130 22:33:02.069957 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:33:17 crc kubenswrapper[4979]: I0130 22:33:17.070162 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:33:17 crc kubenswrapper[4979]: E0130 22:33:17.070889 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:33:29 crc kubenswrapper[4979]: I0130 22:33:29.070272 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:33:29 crc kubenswrapper[4979]: E0130 22:33:29.071190 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:33:41 crc kubenswrapper[4979]: I0130 22:33:41.070451 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:33:41 crc kubenswrapper[4979]: E0130 22:33:41.071601 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:33:52 crc kubenswrapper[4979]: I0130 22:33:52.069855 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:33:52 crc kubenswrapper[4979]: E0130 22:33:52.070557 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:34:04 crc kubenswrapper[4979]: I0130 22:34:04.070860 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084"
Jan 30 22:34:04 crc kubenswrapper[4979]: I0130 22:34:04.524815 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"6507d33392ed644103060903d93e9a938099e8169a78a2f022bc5ff739e88d1d"}
Jan 30 22:36:32 crc kubenswrapper[4979]: I0130 22:36:32.039808 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:36:32 crc kubenswrapper[4979]: I0130 22:36:32.040365 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:37:02 crc kubenswrapper[4979]: I0130 22:37:02.039534 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:37:02 crc kubenswrapper[4979]: I0130 22:37:02.040106 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:37:04 crc kubenswrapper[4979]: I0130 22:37:04.976686 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bbfnr"]
Jan 30 22:37:04 crc kubenswrapper[4979]: E0130 22:37:04.977483 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" containerName="collect-profiles"
Jan 30 22:37:04 crc kubenswrapper[4979]: I0130 22:37:04.977499 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" containerName="collect-profiles"
Jan 30 22:37:04 crc kubenswrapper[4979]: I0130 22:37:04.977671 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" containerName="collect-profiles"
Jan 30 22:37:04 crc kubenswrapper[4979]: I0130 22:37:04.978901 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bbfnr"
Need to start a new one" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:04 crc kubenswrapper[4979]: I0130 22:37:04.993439 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbfnr"] Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.089419 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-catalog-content\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.089758 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p4mt\" (UniqueName: \"kubernetes.io/projected/55590b62-7614-4467-9e71-a7ac065608be-kube-api-access-8p4mt\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.089897 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-utilities\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.190840 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p4mt\" (UniqueName: \"kubernetes.io/projected/55590b62-7614-4467-9e71-a7ac065608be-kube-api-access-8p4mt\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.191201 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-utilities\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.191316 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-catalog-content\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.191616 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-utilities\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.191654 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-catalog-content\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.216126 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8p4mt\" (UniqueName: \"kubernetes.io/projected/55590b62-7614-4467-9e71-a7ac065608be-kube-api-access-8p4mt\") pod \"community-operators-bbfnr\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.299566 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:05 crc kubenswrapper[4979]: I0130 22:37:05.823719 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbfnr"] Jan 30 22:37:06 crc kubenswrapper[4979]: I0130 22:37:06.072969 4979 generic.go:334] "Generic (PLEG): container finished" podID="55590b62-7614-4467-9e71-a7ac065608be" containerID="004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb" exitCode=0 Jan 30 22:37:06 crc kubenswrapper[4979]: I0130 22:37:06.073064 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerDied","Data":"004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb"} Jan 30 22:37:06 crc kubenswrapper[4979]: I0130 22:37:06.073322 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerStarted","Data":"cb846964d8120b4f40a8b60da205d6da39b5487ee4cdf2eab7e41d39dc240a9a"} Jan 30 22:37:06 crc kubenswrapper[4979]: I0130 22:37:06.074498 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:37:07 crc kubenswrapper[4979]: I0130 22:37:07.081948 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerStarted","Data":"d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a"} Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.090483 4979 generic.go:334] "Generic (PLEG): container finished" podID="55590b62-7614-4467-9e71-a7ac065608be" containerID="d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a" exitCode=0 Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.090533 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerDied","Data":"d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a"} Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.187524 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9jwmc"] Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.189215 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jwmc"] Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.189314 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.237134 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm92m\" (UniqueName: \"kubernetes.io/projected/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-kube-api-access-dm92m\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.237198 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-catalog-content\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.237259 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-utilities\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.338375 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm92m\" (UniqueName: \"kubernetes.io/projected/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-kube-api-access-dm92m\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.338437 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-catalog-content\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.339278 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-utilities\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.338472 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-utilities\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.339286 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-catalog-content\") pod \"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.357004 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm92m\" (UniqueName: \"kubernetes.io/projected/83f3bbd3-c82f-47bd-92fe-4dbe53982abc-kube-api-access-dm92m\") pod 
\"certified-operators-9jwmc\" (UID: \"83f3bbd3-c82f-47bd-92fe-4dbe53982abc\") " pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:08 crc kubenswrapper[4979]: I0130 22:37:08.525804 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:09 crc kubenswrapper[4979]: I0130 22:37:09.032512 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jwmc"] Jan 30 22:37:09 crc kubenswrapper[4979]: W0130 22:37:09.037230 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83f3bbd3_c82f_47bd_92fe_4dbe53982abc.slice/crio-5c53530e5edb67eb6f6224c10b108eceacb32e7b7a659d6c99cb56f24b3969dd WatchSource:0}: Error finding container 5c53530e5edb67eb6f6224c10b108eceacb32e7b7a659d6c99cb56f24b3969dd: Status 404 returned error can't find the container with id 5c53530e5edb67eb6f6224c10b108eceacb32e7b7a659d6c99cb56f24b3969dd Jan 30 22:37:09 crc kubenswrapper[4979]: I0130 22:37:09.098785 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerStarted","Data":"79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038"} Jan 30 22:37:09 crc kubenswrapper[4979]: I0130 22:37:09.103967 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jwmc" event={"ID":"83f3bbd3-c82f-47bd-92fe-4dbe53982abc","Type":"ContainerStarted","Data":"5c53530e5edb67eb6f6224c10b108eceacb32e7b7a659d6c99cb56f24b3969dd"} Jan 30 22:37:09 crc kubenswrapper[4979]: I0130 22:37:09.118685 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bbfnr" podStartSLOduration=2.740648238 podStartE2EDuration="5.118648225s" podCreationTimestamp="2026-01-30 22:37:04 +0000 UTC" firstStartedPulling="2026-01-30 22:37:06.074206466 +0000 UTC m=+3422.035453499" lastFinishedPulling="2026-01-30 22:37:08.452206453 +0000 UTC m=+3424.413453486" observedRunningTime="2026-01-30 22:37:09.115822007 +0000 UTC m=+3425.077069050" watchObservedRunningTime="2026-01-30 22:37:09.118648225 +0000 UTC m=+3425.079895258" Jan 30 22:37:10 crc kubenswrapper[4979]: I0130 22:37:10.112069 4979 generic.go:334] "Generic (PLEG): container finished" podID="83f3bbd3-c82f-47bd-92fe-4dbe53982abc" containerID="5806a5523ea433d16d91c6b9e7c9d92d1392042a624ed47b1d34a189b8892a4a" exitCode=0 Jan 30 22:37:10 crc kubenswrapper[4979]: I0130 22:37:10.112139 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jwmc" event={"ID":"83f3bbd3-c82f-47bd-92fe-4dbe53982abc","Type":"ContainerDied","Data":"5806a5523ea433d16d91c6b9e7c9d92d1392042a624ed47b1d34a189b8892a4a"} Jan 30 22:37:14 crc kubenswrapper[4979]: I0130 22:37:14.147744 4979 generic.go:334] "Generic (PLEG): container finished" podID="83f3bbd3-c82f-47bd-92fe-4dbe53982abc" containerID="59a0b05623c4fed8c9ed5ca8dfd1b830c93516a708911d0018f953b6078e7542" exitCode=0 Jan 30 22:37:14 crc kubenswrapper[4979]: I0130 22:37:14.147837 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jwmc" event={"ID":"83f3bbd3-c82f-47bd-92fe-4dbe53982abc","Type":"ContainerDied","Data":"59a0b05623c4fed8c9ed5ca8dfd1b830c93516a708911d0018f953b6078e7542"} Jan 30 22:37:15 crc kubenswrapper[4979]: I0130 22:37:15.178918 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jwmc" event={"ID":"83f3bbd3-c82f-47bd-92fe-4dbe53982abc","Type":"ContainerStarted","Data":"3bf6c9395780112e8c9668989fec70875d5e7948203ae87294f9415bee7bfcf8"} Jan 30 22:37:15 crc kubenswrapper[4979]: I0130 22:37:15.198921 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9jwmc" podStartSLOduration=2.775788769 podStartE2EDuration="7.198898907s" podCreationTimestamp="2026-01-30 22:37:08 +0000 UTC" firstStartedPulling="2026-01-30 22:37:10.115471065 +0000 UTC m=+3426.076718098" lastFinishedPulling="2026-01-30 22:37:14.538581203 +0000 UTC m=+3430.499828236" observedRunningTime="2026-01-30 22:37:15.195898345 +0000 UTC m=+3431.157145378" watchObservedRunningTime="2026-01-30 22:37:15.198898907 +0000 UTC m=+3431.160145940" Jan 30 22:37:15 crc kubenswrapper[4979]: I0130 22:37:15.300802 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:15 crc kubenswrapper[4979]: I0130 22:37:15.300888 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:15 crc kubenswrapper[4979]: I0130 22:37:15.349008 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:16 crc kubenswrapper[4979]: I0130 22:37:16.239481 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:17 crc kubenswrapper[4979]: I0130 22:37:17.168718 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bbfnr"] Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.206099 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bbfnr" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="registry-server" containerID="cri-o://79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038" gracePeriod=2 Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.526704 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.526807 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.600521 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.637768 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.712067 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-catalog-content\") pod \"55590b62-7614-4467-9e71-a7ac065608be\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.712167 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-utilities\") pod \"55590b62-7614-4467-9e71-a7ac065608be\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.712226 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p4mt\" (UniqueName: \"kubernetes.io/projected/55590b62-7614-4467-9e71-a7ac065608be-kube-api-access-8p4mt\") pod \"55590b62-7614-4467-9e71-a7ac065608be\" (UID: \"55590b62-7614-4467-9e71-a7ac065608be\") " Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.713905 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-utilities" (OuterVolumeSpecName: "utilities") pod "55590b62-7614-4467-9e71-a7ac065608be" (UID: "55590b62-7614-4467-9e71-a7ac065608be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.720241 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55590b62-7614-4467-9e71-a7ac065608be-kube-api-access-8p4mt" (OuterVolumeSpecName: "kube-api-access-8p4mt") pod "55590b62-7614-4467-9e71-a7ac065608be" (UID: "55590b62-7614-4467-9e71-a7ac065608be"). InnerVolumeSpecName "kube-api-access-8p4mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.766059 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55590b62-7614-4467-9e71-a7ac065608be" (UID: "55590b62-7614-4467-9e71-a7ac065608be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.814084 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.814123 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55590b62-7614-4467-9e71-a7ac065608be-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:37:18 crc kubenswrapper[4979]: I0130 22:37:18.814135 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p4mt\" (UniqueName: \"kubernetes.io/projected/55590b62-7614-4467-9e71-a7ac065608be-kube-api-access-8p4mt\") on node \"crc\" DevicePath \"\"" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.214884 4979 generic.go:334] "Generic (PLEG): container finished" podID="55590b62-7614-4467-9e71-a7ac065608be" containerID="79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038" exitCode=0 Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.214954 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bbfnr" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.215023 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerDied","Data":"79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038"} Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.215072 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbfnr" event={"ID":"55590b62-7614-4467-9e71-a7ac065608be","Type":"ContainerDied","Data":"cb846964d8120b4f40a8b60da205d6da39b5487ee4cdf2eab7e41d39dc240a9a"} Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.215090 4979 scope.go:117] "RemoveContainer" containerID="79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.240857 4979 scope.go:117] "RemoveContainer" containerID="d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.251999 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bbfnr"] Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.256768 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bbfnr"] Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.272827 4979 scope.go:117] "RemoveContainer" containerID="004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.281177 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9jwmc" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.338105 4979 scope.go:117] "RemoveContainer" containerID="79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038" Jan 30 22:37:19 crc kubenswrapper[4979]: E0130 22:37:19.338778 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038\": container with ID starting with 
79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038 not found: ID does not exist" containerID="79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.338831 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038"} err="failed to get container status \"79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038\": rpc error: code = NotFound desc = could not find container \"79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038\": container with ID starting with 79def89ca523f88a4d0e91354ed3759180fb1b0b104f2feb144e3e81b6e95038 not found: ID does not exist" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.338867 4979 scope.go:117] "RemoveContainer" containerID="d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a" Jan 30 22:37:19 crc kubenswrapper[4979]: E0130 22:37:19.339420 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a\": container with ID starting with d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a not found: ID does not exist" containerID="d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.339457 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a"} err="failed to get container status \"d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a\": rpc error: code = NotFound desc = could not find container \"d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a\": container with ID starting with d99c9bf42bf6be435709235a941df190c608ef3867e0ae7e7a5d8b2424d4b41a not found: ID does not exist" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.339483 4979 scope.go:117] "RemoveContainer" containerID="004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb" Jan 30 22:37:19 crc kubenswrapper[4979]: E0130 22:37:19.339833 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb\": container with ID starting with 004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb not found: ID does not exist" containerID="004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb" Jan 30 22:37:19 crc kubenswrapper[4979]: I0130 22:37:19.339889 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb"} err="failed to get container status \"004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb\": rpc error: code = NotFound desc = could not find container \"004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb\": container with ID starting with 004335d714112ddd145a8567303c3a9c344b4e753bbb1c27b64a9c678db804bb not found: ID does not exist" Jan 30 22:37:20 crc kubenswrapper[4979]: I0130 22:37:20.592293 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jwmc"] Jan 30 22:37:20 crc kubenswrapper[4979]: I0130 22:37:20.966538 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-mvj6v"] Jan 30 22:37:20 crc kubenswrapper[4979]: I0130 22:37:20.966865 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mvj6v" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="registry-server" containerID="cri-o://987424a460c36bb8c4afbae895f5e17f696c5e1c101adee6c040d5a1d185626a" gracePeriod=2 Jan 30 22:37:21 crc kubenswrapper[4979]: I0130 22:37:21.000620 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mvj6v" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="registry-server" probeResult="failure" output="" Jan 30 22:37:21 crc kubenswrapper[4979]: I0130 22:37:21.080169 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55590b62-7614-4467-9e71-a7ac065608be" path="/var/lib/kubelet/pods/55590b62-7614-4467-9e71-a7ac065608be/volumes" Jan 30 22:37:21 crc kubenswrapper[4979]: I0130 22:37:21.231560 4979 generic.go:334] "Generic (PLEG): container finished" podID="135dc03e-075f-41a4-934c-8d914d497f69" containerID="987424a460c36bb8c4afbae895f5e17f696c5e1c101adee6c040d5a1d185626a" exitCode=0 Jan 30 22:37:21 crc kubenswrapper[4979]: I0130 22:37:21.231630 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerDied","Data":"987424a460c36bb8c4afbae895f5e17f696c5e1c101adee6c040d5a1d185626a"} Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.243330 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvj6v" event={"ID":"135dc03e-075f-41a4-934c-8d914d497f69","Type":"ContainerDied","Data":"839a0e21c6342d6c49c0683bac9adda801e1ebfd8079dc25226f6fa62891ca90"} Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.243800 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="839a0e21c6342d6c49c0683bac9adda801e1ebfd8079dc25226f6fa62891ca90" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.287060 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.362606 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mp8q\" (UniqueName: \"kubernetes.io/projected/135dc03e-075f-41a4-934c-8d914d497f69-kube-api-access-2mp8q\") pod \"135dc03e-075f-41a4-934c-8d914d497f69\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.362689 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-utilities\") pod \"135dc03e-075f-41a4-934c-8d914d497f69\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.362800 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-catalog-content\") pod \"135dc03e-075f-41a4-934c-8d914d497f69\" (UID: \"135dc03e-075f-41a4-934c-8d914d497f69\") " Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.364542 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-utilities" (OuterVolumeSpecName: "utilities") pod "135dc03e-075f-41a4-934c-8d914d497f69" (UID: "135dc03e-075f-41a4-934c-8d914d497f69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.373762 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135dc03e-075f-41a4-934c-8d914d497f69-kube-api-access-2mp8q" (OuterVolumeSpecName: "kube-api-access-2mp8q") pod "135dc03e-075f-41a4-934c-8d914d497f69" (UID: "135dc03e-075f-41a4-934c-8d914d497f69"). InnerVolumeSpecName "kube-api-access-2mp8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.422111 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "135dc03e-075f-41a4-934c-8d914d497f69" (UID: "135dc03e-075f-41a4-934c-8d914d497f69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.464912 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mp8q\" (UniqueName: \"kubernetes.io/projected/135dc03e-075f-41a4-934c-8d914d497f69-kube-api-access-2mp8q\") on node \"crc\" DevicePath \"\"" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.464949 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:37:22 crc kubenswrapper[4979]: I0130 22:37:22.464958 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/135dc03e-075f-41a4-934c-8d914d497f69-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:37:23 crc kubenswrapper[4979]: I0130 22:37:23.249183 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvj6v" Jan 30 22:37:23 crc kubenswrapper[4979]: I0130 22:37:23.275354 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvj6v"] Jan 30 22:37:23 crc kubenswrapper[4979]: I0130 22:37:23.281195 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mvj6v"] Jan 30 22:37:25 crc kubenswrapper[4979]: I0130 22:37:25.077503 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135dc03e-075f-41a4-934c-8d914d497f69" path="/var/lib/kubelet/pods/135dc03e-075f-41a4-934c-8d914d497f69/volumes" Jan 30 22:37:25 crc kubenswrapper[4979]: I0130 22:37:25.282719 4979 scope.go:117] "RemoveContainer" containerID="987424a460c36bb8c4afbae895f5e17f696c5e1c101adee6c040d5a1d185626a" Jan 30 22:37:25 crc kubenswrapper[4979]: I0130 22:37:25.313462 4979 scope.go:117] "RemoveContainer" containerID="d404bfe67ff421181512f1fd0ec9b497604ce89b019eae22246b17cef4cbd11a" Jan 30 22:37:25 crc kubenswrapper[4979]: I0130 22:37:25.335949 4979 scope.go:117] "RemoveContainer" containerID="2775cfa6f3efbca70770c0157c242e36a5de365efbaf9c6628031b3077d49317" Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.039956 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.040643 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.040702 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.041430 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6507d33392ed644103060903d93e9a938099e8169a78a2f022bc5ff739e88d1d"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.041496 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://6507d33392ed644103060903d93e9a938099e8169a78a2f022bc5ff739e88d1d" gracePeriod=600 Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.319267 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="6507d33392ed644103060903d93e9a938099e8169a78a2f022bc5ff739e88d1d" exitCode=0 Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.319334 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"6507d33392ed644103060903d93e9a938099e8169a78a2f022bc5ff739e88d1d"} Jan 30 22:37:32 crc kubenswrapper[4979]: I0130 22:37:32.319767 4979 scope.go:117] "RemoveContainer" containerID="5790d9b59d4fc4644b57196b1c3569f8e44ae7e3020fc8a1b7e93b5e8248b084" Jan 30 22:37:33 crc kubenswrapper[4979]: I0130 22:37:33.334699 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"} Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.071226 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5bxpk"] Jan 30 22:38:11 crc kubenswrapper[4979]: E0130 22:38:11.073500 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="registry-server" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073527 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="registry-server" Jan 30 22:38:11 crc kubenswrapper[4979]: E0130 22:38:11.073544 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="registry-server" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073552 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="registry-server" Jan 30 22:38:11 crc kubenswrapper[4979]: E0130 22:38:11.073573 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="extract-utilities" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073580 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="extract-utilities" Jan 30 22:38:11 crc kubenswrapper[4979]: E0130 22:38:11.073590 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="extract-content" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073596 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="extract-content" Jan 30 22:38:11 crc kubenswrapper[4979]: E0130 22:38:11.073604 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="extract-utilities" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073610 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="extract-utilities" Jan 30 22:38:11 crc kubenswrapper[4979]: E0130 22:38:11.073621 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="extract-content" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073626 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="extract-content" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073756 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="135dc03e-075f-41a4-934c-8d914d497f69" containerName="registry-server" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.073770 4979 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="55590b62-7614-4467-9e71-a7ac065608be" containerName="registry-server" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.078721 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.085335 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5bxpk"] Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.229697 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-catalog-content\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.229779 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-utilities\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.229847 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppbfm\" (UniqueName: \"kubernetes.io/projected/91de2670-3c8a-408b-8f65-742db32eb2a4-kube-api-access-ppbfm\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.331091 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-utilities\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.331198 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppbfm\" (UniqueName: \"kubernetes.io/projected/91de2670-3c8a-408b-8f65-742db32eb2a4-kube-api-access-ppbfm\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.331274 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-catalog-content\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.331651 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-utilities\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.331804 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-catalog-content\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " 
pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.354898 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppbfm\" (UniqueName: \"kubernetes.io/projected/91de2670-3c8a-408b-8f65-742db32eb2a4-kube-api-access-ppbfm\") pod \"redhat-operators-5bxpk\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.415405 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:11 crc kubenswrapper[4979]: I0130 22:38:11.708421 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5bxpk"] Jan 30 22:38:12 crc kubenswrapper[4979]: I0130 22:38:12.650910 4979 generic.go:334] "Generic (PLEG): container finished" podID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerID="645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2" exitCode=0 Jan 30 22:38:12 crc kubenswrapper[4979]: I0130 22:38:12.651095 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerDied","Data":"645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2"} Jan 30 22:38:12 crc kubenswrapper[4979]: I0130 22:38:12.651399 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerStarted","Data":"11196abdf95df0f6495604477e8cd4766707c80d6e4e4037cc9f84915871ee09"} Jan 30 22:38:13 crc kubenswrapper[4979]: I0130 22:38:13.674850 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerStarted","Data":"27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396"} Jan 30 22:38:14 crc kubenswrapper[4979]: I0130 22:38:14.685155 4979 generic.go:334] "Generic (PLEG): container finished" podID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerID="27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396" exitCode=0 Jan 30 22:38:14 crc kubenswrapper[4979]: I0130 22:38:14.685217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerDied","Data":"27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396"} Jan 30 22:38:15 crc kubenswrapper[4979]: I0130 22:38:15.696681 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerStarted","Data":"859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908"} Jan 30 22:38:15 crc kubenswrapper[4979]: I0130 22:38:15.714173 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5bxpk" podStartSLOduration=2.286324164 podStartE2EDuration="4.714151197s" podCreationTimestamp="2026-01-30 22:38:11 +0000 UTC" firstStartedPulling="2026-01-30 22:38:12.652463795 +0000 UTC m=+3488.613710828" lastFinishedPulling="2026-01-30 22:38:15.080290828 +0000 UTC m=+3491.041537861" observedRunningTime="2026-01-30 22:38:15.710707892 +0000 UTC m=+3491.671954925" watchObservedRunningTime="2026-01-30 22:38:15.714151197 +0000 UTC m=+3491.675398230" Jan 
30 22:38:21 crc kubenswrapper[4979]: I0130 22:38:21.416335 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:21 crc kubenswrapper[4979]: I0130 22:38:21.416697 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:21 crc kubenswrapper[4979]: I0130 22:38:21.470949 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:21 crc kubenswrapper[4979]: I0130 22:38:21.781336 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:21 crc kubenswrapper[4979]: I0130 22:38:21.833381 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5bxpk"] Jan 30 22:38:23 crc kubenswrapper[4979]: I0130 22:38:23.750820 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5bxpk" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="registry-server" containerID="cri-o://859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908" gracePeriod=2 Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.283535 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.442579 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppbfm\" (UniqueName: \"kubernetes.io/projected/91de2670-3c8a-408b-8f65-742db32eb2a4-kube-api-access-ppbfm\") pod \"91de2670-3c8a-408b-8f65-742db32eb2a4\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.442641 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-utilities\") pod \"91de2670-3c8a-408b-8f65-742db32eb2a4\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.442729 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-catalog-content\") pod \"91de2670-3c8a-408b-8f65-742db32eb2a4\" (UID: \"91de2670-3c8a-408b-8f65-742db32eb2a4\") " Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.443687 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-utilities" (OuterVolumeSpecName: "utilities") pod "91de2670-3c8a-408b-8f65-742db32eb2a4" (UID: "91de2670-3c8a-408b-8f65-742db32eb2a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.452673 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91de2670-3c8a-408b-8f65-742db32eb2a4-kube-api-access-ppbfm" (OuterVolumeSpecName: "kube-api-access-ppbfm") pod "91de2670-3c8a-408b-8f65-742db32eb2a4" (UID: "91de2670-3c8a-408b-8f65-742db32eb2a4"). InnerVolumeSpecName "kube-api-access-ppbfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.544514 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppbfm\" (UniqueName: \"kubernetes.io/projected/91de2670-3c8a-408b-8f65-742db32eb2a4-kube-api-access-ppbfm\") on node \"crc\" DevicePath \"\"" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.544571 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.559702 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91de2670-3c8a-408b-8f65-742db32eb2a4" (UID: "91de2670-3c8a-408b-8f65-742db32eb2a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.645766 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91de2670-3c8a-408b-8f65-742db32eb2a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.771863 4979 generic.go:334] "Generic (PLEG): container finished" podID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerID="859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908" exitCode=0 Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.771920 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5bxpk" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.771957 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerDied","Data":"859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908"} Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.772259 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bxpk" event={"ID":"91de2670-3c8a-408b-8f65-742db32eb2a4","Type":"ContainerDied","Data":"11196abdf95df0f6495604477e8cd4766707c80d6e4e4037cc9f84915871ee09"} Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.772280 4979 scope.go:117] "RemoveContainer" containerID="859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.789429 4979 scope.go:117] "RemoveContainer" containerID="27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.807081 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5bxpk"] Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.811721 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5bxpk"] Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.825023 4979 scope.go:117] "RemoveContainer" containerID="645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.850316 4979 scope.go:117] "RemoveContainer" containerID="859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908" Jan 30 22:38:25 crc kubenswrapper[4979]: E0130 22:38:25.850776 4979 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908\": container with ID starting with 859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908 not found: ID does not exist" containerID="859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.850817 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908"} err="failed to get container status \"859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908\": rpc error: code = NotFound desc = could not find container \"859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908\": container with ID starting with 859a6786d306abd3081bad8d74cbafd19b1e9e21caa5b228249eb05aa9770908 not found: ID does not exist" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.850845 4979 scope.go:117] "RemoveContainer" containerID="27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396" Jan 30 22:38:25 crc kubenswrapper[4979]: E0130 22:38:25.851244 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396\": container with ID starting with 27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396 not found: ID does not exist" containerID="27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.851278 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396"} err="failed to get container status \"27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396\": rpc error: code = NotFound desc = could not find container \"27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396\": container with ID starting with 27d80ae631c2038dace232177efb6bfed54faa25646d50ffad4156d9b3ca4396 not found: ID does not exist" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.851300 4979 scope.go:117] "RemoveContainer" containerID="645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2" Jan 30 22:38:25 crc kubenswrapper[4979]: E0130 22:38:25.851654 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2\": container with ID starting with 645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2 not found: ID does not exist" containerID="645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2" Jan 30 22:38:25 crc kubenswrapper[4979]: I0130 22:38:25.851686 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2"} err="failed to get container status \"645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2\": rpc error: code = NotFound desc = could not find container \"645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2\": container with ID starting with 645ed9d3986867cef5c5a8a4b7692f2a666a5a620c9f0526b8705d8c33db3cf2 not found: ID does not exist" Jan 30 22:38:27 crc kubenswrapper[4979]: I0130 22:38:27.078919 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" path="/var/lib/kubelet/pods/91de2670-3c8a-408b-8f65-742db32eb2a4/volumes" Jan 30 22:39:32 crc kubenswrapper[4979]: I0130 22:39:32.039363 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:39:32 crc kubenswrapper[4979]: I0130 22:39:32.039835 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.552183 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-66xk9"] Jan 30 22:39:46 crc kubenswrapper[4979]: E0130 22:39:46.552973 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="extract-content" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.552985 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="extract-content" Jan 30 22:39:46 crc kubenswrapper[4979]: E0130 22:39:46.552998 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="registry-server" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.553004 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="registry-server" Jan 30 22:39:46 crc kubenswrapper[4979]: E0130 22:39:46.553020 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="extract-utilities" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.553027 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="extract-utilities" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.553182 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="91de2670-3c8a-408b-8f65-742db32eb2a4" containerName="registry-server" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.554078 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.567828 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-66xk9"] Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.593552 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-utilities\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.593625 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd464\" (UniqueName: \"kubernetes.io/projected/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-kube-api-access-sd464\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.593656 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-catalog-content\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.694625 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-utilities\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.694695 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd464\" (UniqueName: \"kubernetes.io/projected/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-kube-api-access-sd464\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.694723 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-catalog-content\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.695213 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-utilities\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.695258 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-catalog-content\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.719799 4979 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-sd464\" (UniqueName: \"kubernetes.io/projected/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-kube-api-access-sd464\") pod \"redhat-marketplace-66xk9\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") " pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:46 crc kubenswrapper[4979]: I0130 22:39:46.875722 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:47 crc kubenswrapper[4979]: I0130 22:39:47.290232 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-66xk9"] Jan 30 22:39:47 crc kubenswrapper[4979]: I0130 22:39:47.409408 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66xk9" event={"ID":"c3826ec4-db18-474e-8fbf-0f4fd2c4669f","Type":"ContainerStarted","Data":"d46def05756d6dc16cd8a2911dd8cc842950b9f35c428cde368fcb6a9dc7f78f"} Jan 30 22:39:48 crc kubenswrapper[4979]: I0130 22:39:48.419942 4979 generic.go:334] "Generic (PLEG): container finished" podID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerID="5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722" exitCode=0 Jan 30 22:39:48 crc kubenswrapper[4979]: I0130 22:39:48.420025 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66xk9" event={"ID":"c3826ec4-db18-474e-8fbf-0f4fd2c4669f","Type":"ContainerDied","Data":"5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722"} Jan 30 22:39:49 crc kubenswrapper[4979]: I0130 22:39:49.428181 4979 generic.go:334] "Generic (PLEG): container finished" podID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerID="81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6" exitCode=0 Jan 30 22:39:49 crc kubenswrapper[4979]: I0130 22:39:49.428305 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66xk9" event={"ID":"c3826ec4-db18-474e-8fbf-0f4fd2c4669f","Type":"ContainerDied","Data":"81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6"} Jan 30 22:39:50 crc kubenswrapper[4979]: I0130 22:39:50.441358 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66xk9" event={"ID":"c3826ec4-db18-474e-8fbf-0f4fd2c4669f","Type":"ContainerStarted","Data":"190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2"} Jan 30 22:39:56 crc kubenswrapper[4979]: I0130 22:39:56.876882 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:56 crc kubenswrapper[4979]: I0130 22:39:56.877542 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:56 crc kubenswrapper[4979]: I0130 22:39:56.929602 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-66xk9" Jan 30 22:39:56 crc kubenswrapper[4979]: I0130 22:39:56.954621 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-66xk9" podStartSLOduration=9.513907694 podStartE2EDuration="10.954601574s" podCreationTimestamp="2026-01-30 22:39:46 +0000 UTC" firstStartedPulling="2026-01-30 22:39:48.421935315 +0000 UTC m=+3584.383182348" lastFinishedPulling="2026-01-30 22:39:49.862629195 +0000 UTC m=+3585.823876228" observedRunningTime="2026-01-30 22:39:50.469930314 +0000 UTC 
m=+3586.431177367" watchObservedRunningTime="2026-01-30 22:39:56.954601574 +0000 UTC m=+3592.915848627"
Jan 30 22:39:57 crc kubenswrapper[4979]: I0130 22:39:57.542084 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-66xk9"
Jan 30 22:39:57 crc kubenswrapper[4979]: I0130 22:39:57.597943 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-66xk9"]
Jan 30 22:39:59 crc kubenswrapper[4979]: I0130 22:39:59.512945 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-66xk9" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="registry-server" containerID="cri-o://190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2" gracePeriod=2
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.019743 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66xk9"
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.081961 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd464\" (UniqueName: \"kubernetes.io/projected/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-kube-api-access-sd464\") pod \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") "
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.086497 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-kube-api-access-sd464" (OuterVolumeSpecName: "kube-api-access-sd464") pod "c3826ec4-db18-474e-8fbf-0f4fd2c4669f" (UID: "c3826ec4-db18-474e-8fbf-0f4fd2c4669f"). InnerVolumeSpecName "kube-api-access-sd464". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.183617 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-catalog-content\") pod \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") "
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.183670 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-utilities\") pod \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\" (UID: \"c3826ec4-db18-474e-8fbf-0f4fd2c4669f\") "
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.184195 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd464\" (UniqueName: \"kubernetes.io/projected/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-kube-api-access-sd464\") on node \"crc\" DevicePath \"\""
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.184612 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-utilities" (OuterVolumeSpecName: "utilities") pod "c3826ec4-db18-474e-8fbf-0f4fd2c4669f" (UID: "c3826ec4-db18-474e-8fbf-0f4fd2c4669f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.213284 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3826ec4-db18-474e-8fbf-0f4fd2c4669f" (UID: "c3826ec4-db18-474e-8fbf-0f4fd2c4669f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.285696 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.285733 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3826ec4-db18-474e-8fbf-0f4fd2c4669f-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.524143 4979 generic.go:334] "Generic (PLEG): container finished" podID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerID="190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2" exitCode=0
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.524217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66xk9" event={"ID":"c3826ec4-db18-474e-8fbf-0f4fd2c4669f","Type":"ContainerDied","Data":"190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2"}
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.524242 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66xk9"
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.524288 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66xk9" event={"ID":"c3826ec4-db18-474e-8fbf-0f4fd2c4669f","Type":"ContainerDied","Data":"d46def05756d6dc16cd8a2911dd8cc842950b9f35c428cde368fcb6a9dc7f78f"}
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.524332 4979 scope.go:117] "RemoveContainer" containerID="190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2"
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.553484 4979 scope.go:117] "RemoveContainer" containerID="81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6"
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.572939 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-66xk9"]
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.583159 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-66xk9"]
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.595456 4979 scope.go:117] "RemoveContainer" containerID="5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722"
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.613163 4979 scope.go:117] "RemoveContainer" containerID="190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2"
Jan 30 22:40:00 crc kubenswrapper[4979]: E0130 22:40:00.613705 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2\": container with ID starting with 190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2 not found: ID does not exist" containerID="190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2"
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.613823 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2"} err="failed to get container status \"190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2\": rpc error: code = NotFound desc = could not find container \"190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2\": container with ID starting with 190068e5f094d64b40a71fa9cd66b0f30820d34540c9a1f98f8da96c402057b2 not found: ID does not exist"
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.613917 4979 scope.go:117] "RemoveContainer" containerID="81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6"
Jan 30 22:40:00 crc kubenswrapper[4979]: E0130 22:40:00.614359 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6\": container with ID starting with 81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6 not found: ID does not exist" containerID="81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6"
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.614393 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6"} err="failed to get container status \"81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6\": rpc error: code = NotFound desc = could not find container \"81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6\": container with ID starting with 81eb79d7cfa005a8c2d3584cf95b226173b37fca8d3239fddc312d5f023f2ca6 not found: ID does not exist"
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.614416 4979 scope.go:117] "RemoveContainer" containerID="5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722"
Jan 30 22:40:00 crc kubenswrapper[4979]: E0130 22:40:00.614738 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722\": container with ID starting with 5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722 not found: ID does not exist" containerID="5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722"
Jan 30 22:40:00 crc kubenswrapper[4979]: I0130 22:40:00.614773 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722"} err="failed to get container status \"5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722\": rpc error: code = NotFound desc = could not find container \"5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722\": container with ID starting with 5f8c4756445e03c2bec905b01e5868dd2c3d8111c5f3ad47a9409648d050f722 not found: ID does not exist"
Jan 30 22:40:01 crc kubenswrapper[4979]: I0130 22:40:01.086387 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" path="/var/lib/kubelet/pods/c3826ec4-db18-474e-8fbf-0f4fd2c4669f/volumes"
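The paired "RemoveContainer" / "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" lines above are benign: the containers were already removed by the first deletion pass, so the follow-up lookups can only fail with NotFound, and the kubelet logs that and moves on. A minimal Go sketch of that idempotent-removal pattern (hypothetical names, not kubelet source):

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for a CRI NotFound status code.
var errNotFound = errors.New("not found")

// removeContainer is a hypothetical runtime call; it fails with a wrapped
// errNotFound once the container is already gone.
func removeContainer(id string, alreadyGone bool) error {
	if alreadyGone {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	return nil
}

// removeIdempotent treats "already deleted" as success, so a second removal
// pass (like the one logged above) produces log noise but no retry loop.
func removeIdempotent(id string, alreadyGone bool) error {
	if err := removeContainer(id, alreadyGone); err != nil {
		if errors.Is(err, errNotFound) {
			return nil // another pass finished the job first
		}
		return err
	}
	return nil
}

func main() {
	fmt.Println(removeIdempotent("190068e5f094", true))  // <nil>: NotFound swallowed
	fmt.Println(removeIdempotent("190068e5f094", false)) // <nil>: normal removal
}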
Jan 30 22:40:02 crc kubenswrapper[4979]: I0130 22:40:02.040002 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:40:02 crc kubenswrapper[4979]: I0130 22:40:02.040164 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.039915 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.040929 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.041014 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg"
Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.042310 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.042409 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" gracePeriod=600
Jan 30 22:40:32 crc kubenswrapper[4979]: E0130 22:40:32.168129 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.786643 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" exitCode=0
Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.786704 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"}
Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.786752 4979 scope.go:117] "RemoveContainer" containerID="6507d33392ed644103060903d93e9a938099e8169a78a2f022bc5ff739e88d1d"
Jan 30 22:40:32 crc kubenswrapper[4979]: I0130 22:40:32.787385 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"
Jan 30 22:40:32 crc kubenswrapper[4979]: E0130 22:40:32.787650 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:40:43 crc kubenswrapper[4979]: I0130 22:40:43.070527 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"
Jan 30 22:40:43 crc kubenswrapper[4979]: E0130 22:40:43.071640 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
[identical "RemoveContainer" / "Error syncing pod, skipping" (CrashLoopBackOff back-off 5m0s) pairs for containerID 38c7654ef2f7... repeat, differing only in timestamp, at 22:40:55, 22:41:06, 22:41:20, 22:41:33, 22:41:45, 22:41:56, 22:42:08, 22:42:23, 22:42:36, 22:42:49, 22:43:00, 22:43:15, 22:43:30, 22:43:43, 22:43:57, 22:44:11, 22:44:23 and 22:44:38]
Jan 30 22:44:51 crc kubenswrapper[4979]: I0130 22:44:51.069951 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"
Jan 30 22:44:51 crc kubenswrapper[4979]: E0130 22:44:51.070681 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
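The collapsed run above is the restart throttle at work: the sync loop keeps re-evaluating the pod every few seconds, but each attempt is rejected because the crash-loop back-off has already grown to its cap, the "back-off 5m0s" quoted in the message. Kubernetes' container restart back-off is commonly described as starting around 10s and doubling per restart until it saturates at five minutes; a small Go sketch of that doubling schedule (illustrative values, not kubelet code):

package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns the first n crash-loop delays for a doubling
// back-off with the given base and cap.
func backoffSchedule(base, limit time.Duration, n int) []time.Duration {
	out := make([]time.Duration, 0, n)
	d := base
	for i := 0; i < n; i++ {
		out = append(out, d)
		d *= 2
		if d > limit {
			d = limit
		}
	}
	return out
}

func main() {
	// With a 10s base and 5m cap: [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s]
	fmt.Println(backoffSchedule(10*time.Second, 5*time.Minute, 8))
}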
podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.181108 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r"] Jan 30 22:45:00 crc kubenswrapper[4979]: E0130 22:45:00.182180 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="extract-utilities" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.182207 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="extract-utilities" Jan 30 22:45:00 crc kubenswrapper[4979]: E0130 22:45:00.182237 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="extract-content" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.182250 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="extract-content" Jan 30 22:45:00 crc kubenswrapper[4979]: E0130 22:45:00.182272 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.182283 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.182562 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3826ec4-db18-474e-8fbf-0f4fd2c4669f" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.183335 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.186876 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.187137 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.189990 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r"] Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.290291 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/104b2fbe-7925-4ef8-afca-adf78844b1e4-config-volume\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.290405 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/104b2fbe-7925-4ef8-afca-adf78844b1e4-secret-volume\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.290443 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkjdd\" (UniqueName: 
\"kubernetes.io/projected/104b2fbe-7925-4ef8-afca-adf78844b1e4-kube-api-access-mkjdd\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.391864 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/104b2fbe-7925-4ef8-afca-adf78844b1e4-secret-volume\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.391923 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkjdd\" (UniqueName: \"kubernetes.io/projected/104b2fbe-7925-4ef8-afca-adf78844b1e4-kube-api-access-mkjdd\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.394537 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/104b2fbe-7925-4ef8-afca-adf78844b1e4-config-volume\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.395409 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/104b2fbe-7925-4ef8-afca-adf78844b1e4-config-volume\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.407976 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/104b2fbe-7925-4ef8-afca-adf78844b1e4-secret-volume\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.408750 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkjdd\" (UniqueName: \"kubernetes.io/projected/104b2fbe-7925-4ef8-afca-adf78844b1e4-kube-api-access-mkjdd\") pod \"collect-profiles-29496885-4tk4r\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.504942 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:00 crc kubenswrapper[4979]: I0130 22:45:00.927778 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r"] Jan 30 22:45:00 crc kubenswrapper[4979]: W0130 22:45:00.938213 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod104b2fbe_7925_4ef8_afca_adf78844b1e4.slice/crio-8c7d605fe82000d5fa44e2b110a5356e9f2e082328881ce931e5be66faf8bee1 WatchSource:0}: Error finding container 8c7d605fe82000d5fa44e2b110a5356e9f2e082328881ce931e5be66faf8bee1: Status 404 returned error can't find the container with id 8c7d605fe82000d5fa44e2b110a5356e9f2e082328881ce931e5be66faf8bee1 Jan 30 22:45:01 crc kubenswrapper[4979]: I0130 22:45:01.003999 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" event={"ID":"104b2fbe-7925-4ef8-afca-adf78844b1e4","Type":"ContainerStarted","Data":"8c7d605fe82000d5fa44e2b110a5356e9f2e082328881ce931e5be66faf8bee1"} Jan 30 22:45:02 crc kubenswrapper[4979]: I0130 22:45:02.011931 4979 generic.go:334] "Generic (PLEG): container finished" podID="104b2fbe-7925-4ef8-afca-adf78844b1e4" containerID="f4376d94646a15043c11ecee25a291d34f53ab6e158c8bf8bf94d2318ee02027" exitCode=0 Jan 30 22:45:02 crc kubenswrapper[4979]: I0130 22:45:02.012078 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" event={"ID":"104b2fbe-7925-4ef8-afca-adf78844b1e4","Type":"ContainerDied","Data":"f4376d94646a15043c11ecee25a291d34f53ab6e158c8bf8bf94d2318ee02027"} Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.239537 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.336791 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/104b2fbe-7925-4ef8-afca-adf78844b1e4-secret-volume\") pod \"104b2fbe-7925-4ef8-afca-adf78844b1e4\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.336905 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/104b2fbe-7925-4ef8-afca-adf78844b1e4-config-volume\") pod \"104b2fbe-7925-4ef8-afca-adf78844b1e4\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.336988 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkjdd\" (UniqueName: \"kubernetes.io/projected/104b2fbe-7925-4ef8-afca-adf78844b1e4-kube-api-access-mkjdd\") pod \"104b2fbe-7925-4ef8-afca-adf78844b1e4\" (UID: \"104b2fbe-7925-4ef8-afca-adf78844b1e4\") " Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.337527 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/104b2fbe-7925-4ef8-afca-adf78844b1e4-config-volume" (OuterVolumeSpecName: "config-volume") pod "104b2fbe-7925-4ef8-afca-adf78844b1e4" (UID: "104b2fbe-7925-4ef8-afca-adf78844b1e4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.338210 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/104b2fbe-7925-4ef8-afca-adf78844b1e4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.341933 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104b2fbe-7925-4ef8-afca-adf78844b1e4-kube-api-access-mkjdd" (OuterVolumeSpecName: "kube-api-access-mkjdd") pod "104b2fbe-7925-4ef8-afca-adf78844b1e4" (UID: "104b2fbe-7925-4ef8-afca-adf78844b1e4"). InnerVolumeSpecName "kube-api-access-mkjdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.343320 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/104b2fbe-7925-4ef8-afca-adf78844b1e4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "104b2fbe-7925-4ef8-afca-adf78844b1e4" (UID: "104b2fbe-7925-4ef8-afca-adf78844b1e4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.439619 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkjdd\" (UniqueName: \"kubernetes.io/projected/104b2fbe-7925-4ef8-afca-adf78844b1e4-kube-api-access-mkjdd\") on node \"crc\" DevicePath \"\"" Jan 30 22:45:03 crc kubenswrapper[4979]: I0130 22:45:03.439655 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/104b2fbe-7925-4ef8-afca-adf78844b1e4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:45:04 crc kubenswrapper[4979]: I0130 22:45:04.026736 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r" event={"ID":"104b2fbe-7925-4ef8-afca-adf78844b1e4","Type":"ContainerDied","Data":"8c7d605fe82000d5fa44e2b110a5356e9f2e082328881ce931e5be66faf8bee1"} Jan 30 22:45:04 crc kubenswrapper[4979]: I0130 22:45:04.026771 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7d605fe82000d5fa44e2b110a5356e9f2e082328881ce931e5be66faf8bee1" Jan 30 22:45:04 crc kubenswrapper[4979]: I0130 22:45:04.026808 4979 util.go:48] "No ready sandbox for pod can be found. 
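Before admitting the new collect-profiles pod above, the kubelet purged per-container CPU and memory bookkeeping left over from the redhat-marketplace pod deleted earlier ("RemoveStaleState: removing container" / "Deleted CPUSet assignment"). A rough Go sketch of that kind of purge, with hypothetical types standing in for the cpu/memory manager state:

package main

import "fmt"

// key identifies a container's resource assignment by pod UID plus
// container name, as the cpu/memory managers do (types are hypothetical).
type key struct {
	podUID    string
	container string
}

// assignments maps each container to its recorded cpuset.
type assignments map[key]string

// purgeStale drops entries whose pod is no longer active, mirroring the
// "RemoveStaleState" pass in the log above.
func (a assignments) purgeStale(activePods map[string]bool) {
	for k := range a {
		if !activePods[k.podUID] {
			fmt.Printf("removing stale state for %s/%s\n", k.podUID, k.container)
			delete(a, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	a := assignments{
		{podUID: "c3826ec4", container: "registry-server"}:  "0-3",
		{podUID: "104b2fbe", container: "collect-profiles"}: "0-3",
	}
	a.purgeStale(map[string]bool{"104b2fbe": true}) // c3826ec4 was deleted earlier
	fmt.Println(len(a))                             // 1
}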
Jan 30 22:45:04 crc kubenswrapper[4979]: I0130 22:45:04.303456 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4"]
Jan 30 22:45:04 crc kubenswrapper[4979]: I0130 22:45:04.310581 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-tqcs4"]
Jan 30 22:45:05 crc kubenswrapper[4979]: I0130 22:45:05.076930 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"
Jan 30 22:45:05 crc kubenswrapper[4979]: E0130 22:45:05.077475 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:45:05 crc kubenswrapper[4979]: I0130 22:45:05.092716 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="365cfffa-828e-4f0e-9903-4c1580e20c67" path="/var/lib/kubelet/pods/365cfffa-828e-4f0e-9903-4c1580e20c67/volumes"
Jan 30 22:45:18 crc kubenswrapper[4979]: I0130 22:45:18.069331 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"
Jan 30 22:45:18 crc kubenswrapper[4979]: E0130 22:45:18.069983 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:45:25 crc kubenswrapper[4979]: I0130 22:45:25.544252 4979 scope.go:117] "RemoveContainer" containerID="63071af88423f456a45a4b58ad51314f65c32700ee4fa8a2ebb6bbca8fea7b68"
Jan 30 22:45:30 crc kubenswrapper[4979]: I0130 22:45:30.070241 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"
Jan 30 22:45:30 crc kubenswrapper[4979]: E0130 22:45:30.071132 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:45:43 crc kubenswrapper[4979]: I0130 22:45:43.070302 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9"
Jan 30 22:45:43 crc kubenswrapper[4979]: I0130 22:45:43.296090 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"ba9860fb5e76e8b37e67c5dcfa291e9395710ff34773720960ef977de36e471e"}
source="api" pods=["openshift-marketplace/certified-operators-bgb7v"] Jan 30 22:47:27 crc kubenswrapper[4979]: E0130 22:47:27.345711 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104b2fbe-7925-4ef8-afca-adf78844b1e4" containerName="collect-profiles" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.345733 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="104b2fbe-7925-4ef8-afca-adf78844b1e4" containerName="collect-profiles" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.345930 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="104b2fbe-7925-4ef8-afca-adf78844b1e4" containerName="collect-profiles" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.347230 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bgb7v"] Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.347710 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.493585 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-utilities\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.493637 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57mcc\" (UniqueName: \"kubernetes.io/projected/db22aed9-7413-4d06-8b61-fb6f730cf1cc-kube-api-access-57mcc\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.493796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-catalog-content\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.594567 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-catalog-content\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.594626 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-utilities\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.594647 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57mcc\" (UniqueName: \"kubernetes.io/projected/db22aed9-7413-4d06-8b61-fb6f730cf1cc-kube-api-access-57mcc\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.595235 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-catalog-content\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.595292 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-utilities\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.615459 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57mcc\" (UniqueName: \"kubernetes.io/projected/db22aed9-7413-4d06-8b61-fb6f730cf1cc-kube-api-access-57mcc\") pod \"certified-operators-bgb7v\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") " pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:27 crc kubenswrapper[4979]: I0130 22:47:27.670696 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bgb7v" Jan 30 22:47:28 crc kubenswrapper[4979]: I0130 22:47:28.201944 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bgb7v"] Jan 30 22:47:29 crc kubenswrapper[4979]: I0130 22:47:29.119448 4979 generic.go:334] "Generic (PLEG): container finished" podID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerID="4283e036f0bfd880adf92b50ac2f32a4a2845dd240c425f041c8745290cf9cd6" exitCode=0 Jan 30 22:47:29 crc kubenswrapper[4979]: I0130 22:47:29.119563 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgb7v" event={"ID":"db22aed9-7413-4d06-8b61-fb6f730cf1cc","Type":"ContainerDied","Data":"4283e036f0bfd880adf92b50ac2f32a4a2845dd240c425f041c8745290cf9cd6"} Jan 30 22:47:29 crc kubenswrapper[4979]: I0130 22:47:29.119805 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgb7v" event={"ID":"db22aed9-7413-4d06-8b61-fb6f730cf1cc","Type":"ContainerStarted","Data":"2c8eee9870667e78df791eca9d462625a8b2ae9eab002a6e958a2d7adf4b6611"} Jan 30 22:47:29 crc kubenswrapper[4979]: I0130 22:47:29.123005 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:47:30 crc kubenswrapper[4979]: I0130 22:47:30.128078 4979 generic.go:334] "Generic (PLEG): container finished" podID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerID="bee75989ae32b9e3da9cd5d54c7b52fae48857d4c521afab1b9f1195918e3919" exitCode=0 Jan 30 22:47:30 crc kubenswrapper[4979]: I0130 22:47:30.128174 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgb7v" event={"ID":"db22aed9-7413-4d06-8b61-fb6f730cf1cc","Type":"ContainerDied","Data":"bee75989ae32b9e3da9cd5d54c7b52fae48857d4c521afab1b9f1195918e3919"} Jan 30 22:47:31 crc kubenswrapper[4979]: I0130 22:47:31.138213 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgb7v" event={"ID":"db22aed9-7413-4d06-8b61-fb6f730cf1cc","Type":"ContainerStarted","Data":"3db7188101669d98aeea1cda01ca1c0f031711d41a8e5d6b6bb60560f0e05f79"} Jan 30 22:47:31 crc kubenswrapper[4979]: I0130 22:47:31.176573 4979 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-bgb7v" podStartSLOduration=2.554934426 podStartE2EDuration="4.176548653s" podCreationTimestamp="2026-01-30 22:47:27 +0000 UTC" firstStartedPulling="2026-01-30 22:47:29.122555357 +0000 UTC m=+4045.083802430" lastFinishedPulling="2026-01-30 22:47:30.744169634 +0000 UTC m=+4046.705416657" observedRunningTime="2026-01-30 22:47:31.157126238 +0000 UTC m=+4047.118373271" watchObservedRunningTime="2026-01-30 22:47:31.176548653 +0000 UTC m=+4047.137795686" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.049740 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kw66v"] Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.051844 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.065258 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kw66v"] Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.222815 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-catalog-content\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.223059 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpf27\" (UniqueName: \"kubernetes.io/projected/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-kube-api-access-bpf27\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.223294 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-utilities\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.324519 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpf27\" (UniqueName: \"kubernetes.io/projected/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-kube-api-access-bpf27\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.324667 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-utilities\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.324710 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-catalog-content\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v" Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.325383 
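The "Observed pod startup duration" entry a few lines above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same span minus the image-pull window measured on the monotonic (m=+...) clock. For certified-operators-bgb7v: 22:47:31.176548653 - 22:47:27 = 4.176548653s, and 4.176548653 - (4046.705416657 - 4045.083802430) = 4.176548653 - 1.621614227 = 2.554934426, matching the logged value. The community-operators-kw66v entry below checks out the same way: 3.21098438 - (4053.503445691 - 4052.131786854) = 1.839325543.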
Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.325383 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-utilities\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.325469 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-catalog-content\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.353848 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpf27\" (UniqueName: \"kubernetes.io/projected/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-kube-api-access-bpf27\") pod \"community-operators-kw66v\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") " pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.377439 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:35 crc kubenswrapper[4979]: I0130 22:47:35.843141 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kw66v"]
Jan 30 22:47:36 crc kubenswrapper[4979]: I0130 22:47:36.169067 4979 generic.go:334] "Generic (PLEG): container finished" podID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerID="26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf" exitCode=0
Jan 30 22:47:36 crc kubenswrapper[4979]: I0130 22:47:36.169109 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kw66v" event={"ID":"06f8e9b3-9b00-4fcb-ae98-1fac6314845e","Type":"ContainerDied","Data":"26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf"}
Jan 30 22:47:36 crc kubenswrapper[4979]: I0130 22:47:36.169134 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kw66v" event={"ID":"06f8e9b3-9b00-4fcb-ae98-1fac6314845e","Type":"ContainerStarted","Data":"ab59619a27c710eb68b79d0a064ccdbed30ed0efc3ed64a23d934642a11a4801"}
Jan 30 22:47:37 crc kubenswrapper[4979]: I0130 22:47:37.178689 4979 generic.go:334] "Generic (PLEG): container finished" podID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerID="31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a" exitCode=0
Jan 30 22:47:37 crc kubenswrapper[4979]: I0130 22:47:37.178778 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kw66v" event={"ID":"06f8e9b3-9b00-4fcb-ae98-1fac6314845e","Type":"ContainerDied","Data":"31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a"}
Jan 30 22:47:37 crc kubenswrapper[4979]: I0130 22:47:37.671084 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bgb7v"
Jan 30 22:47:37 crc kubenswrapper[4979]: I0130 22:47:37.671674 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bgb7v"
Jan 30 22:47:37 crc kubenswrapper[4979]: I0130 22:47:37.711862 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bgb7v"
Jan 30 22:47:38 crc kubenswrapper[4979]: I0130 22:47:38.188222 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kw66v" event={"ID":"06f8e9b3-9b00-4fcb-ae98-1fac6314845e","Type":"ContainerStarted","Data":"922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d"}
Jan 30 22:47:38 crc kubenswrapper[4979]: I0130 22:47:38.210999 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kw66v" podStartSLOduration=1.839325543 podStartE2EDuration="3.21098438s" podCreationTimestamp="2026-01-30 22:47:35 +0000 UTC" firstStartedPulling="2026-01-30 22:47:36.170539821 +0000 UTC m=+4052.131786854" lastFinishedPulling="2026-01-30 22:47:37.542198658 +0000 UTC m=+4053.503445691" observedRunningTime="2026-01-30 22:47:38.205551403 +0000 UTC m=+4054.166798436" watchObservedRunningTime="2026-01-30 22:47:38.21098438 +0000 UTC m=+4054.172231403"
Jan 30 22:47:38 crc kubenswrapper[4979]: I0130 22:47:38.241861 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bgb7v"
Jan 30 22:47:40 crc kubenswrapper[4979]: I0130 22:47:40.014164 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bgb7v"]
Jan 30 22:47:41 crc kubenswrapper[4979]: I0130 22:47:41.211981 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bgb7v" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="registry-server" containerID="cri-o://3db7188101669d98aeea1cda01ca1c0f031711d41a8e5d6b6bb60560f0e05f79" gracePeriod=2
Jan 30 22:47:42 crc kubenswrapper[4979]: I0130 22:47:42.222163 4979 generic.go:334] "Generic (PLEG): container finished" podID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerID="3db7188101669d98aeea1cda01ca1c0f031711d41a8e5d6b6bb60560f0e05f79" exitCode=0
Jan 30 22:47:42 crc kubenswrapper[4979]: I0130 22:47:42.222643 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgb7v" event={"ID":"db22aed9-7413-4d06-8b61-fb6f730cf1cc","Type":"ContainerDied","Data":"3db7188101669d98aeea1cda01ca1c0f031711d41a8e5d6b6bb60560f0e05f79"}
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.021279 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bgb7v"
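The "Killing container with a grace period ... gracePeriod=2" entries here follow the usual termination contract: the runtime delivers SIGTERM, waits up to the grace period, then escalates to SIGKILL; the exitCode=0 lines that follow show registry-server exiting within its window. In broad strokes (an illustrative Go sketch of the pattern, not CRI-O code; unix-only):

package main

import (
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace sends SIGTERM, waits up to grace for the process to exit,
// then escalates to SIGKILL -- the same shape as the kubelet's
// "Killing container with a grace period" step.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited inside the grace window (cf. exitCode=0 above)
	case <-time.After(grace):
		return cmd.Process.Kill() // grace expired: SIGKILL
	}
}

func main() {
	cmd := exec.Command("sleep", "30")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = killWithGrace(cmd, 2*time.Second)
}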
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.137250 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-catalog-content\") pod \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") "
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.137318 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-utilities\") pod \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") "
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.137374 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57mcc\" (UniqueName: \"kubernetes.io/projected/db22aed9-7413-4d06-8b61-fb6f730cf1cc-kube-api-access-57mcc\") pod \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\" (UID: \"db22aed9-7413-4d06-8b61-fb6f730cf1cc\") "
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.138915 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-utilities" (OuterVolumeSpecName: "utilities") pod "db22aed9-7413-4d06-8b61-fb6f730cf1cc" (UID: "db22aed9-7413-4d06-8b61-fb6f730cf1cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.145127 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db22aed9-7413-4d06-8b61-fb6f730cf1cc-kube-api-access-57mcc" (OuterVolumeSpecName: "kube-api-access-57mcc") pod "db22aed9-7413-4d06-8b61-fb6f730cf1cc" (UID: "db22aed9-7413-4d06-8b61-fb6f730cf1cc"). InnerVolumeSpecName "kube-api-access-57mcc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.193281 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db22aed9-7413-4d06-8b61-fb6f730cf1cc" (UID: "db22aed9-7413-4d06-8b61-fb6f730cf1cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.231435 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgb7v" event={"ID":"db22aed9-7413-4d06-8b61-fb6f730cf1cc","Type":"ContainerDied","Data":"2c8eee9870667e78df791eca9d462625a8b2ae9eab002a6e958a2d7adf4b6611"}
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.231487 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bgb7v"
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.231501 4979 scope.go:117] "RemoveContainer" containerID="3db7188101669d98aeea1cda01ca1c0f031711d41a8e5d6b6bb60560f0e05f79"
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.238664 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.238690 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57mcc\" (UniqueName: \"kubernetes.io/projected/db22aed9-7413-4d06-8b61-fb6f730cf1cc-kube-api-access-57mcc\") on node \"crc\" DevicePath \"\""
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.238698 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db22aed9-7413-4d06-8b61-fb6f730cf1cc-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.257874 4979 scope.go:117] "RemoveContainer" containerID="bee75989ae32b9e3da9cd5d54c7b52fae48857d4c521afab1b9f1195918e3919"
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.269007 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bgb7v"]
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.274616 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bgb7v"]
Jan 30 22:47:43 crc kubenswrapper[4979]: I0130 22:47:43.286032 4979 scope.go:117] "RemoveContainer" containerID="4283e036f0bfd880adf92b50ac2f32a4a2845dd240c425f041c8745290cf9cd6"
Jan 30 22:47:45 crc kubenswrapper[4979]: I0130 22:47:45.083130 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" path="/var/lib/kubelet/pods/db22aed9-7413-4d06-8b61-fb6f730cf1cc/volumes"
Jan 30 22:47:45 crc kubenswrapper[4979]: I0130 22:47:45.378547 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:45 crc kubenswrapper[4979]: I0130 22:47:45.378632 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:45 crc kubenswrapper[4979]: I0130 22:47:45.439437 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:46 crc kubenswrapper[4979]: I0130 22:47:46.300690 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:47 crc kubenswrapper[4979]: I0130 22:47:47.014883 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kw66v"]
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.267121 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kw66v" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="registry-server" containerID="cri-o://922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d" gracePeriod=2
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.671141 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.818243 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-catalog-content\") pod \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") "
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.818754 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpf27\" (UniqueName: \"kubernetes.io/projected/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-kube-api-access-bpf27\") pod \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") "
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.818900 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-utilities\") pod \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\" (UID: \"06f8e9b3-9b00-4fcb-ae98-1fac6314845e\") "
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.820092 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-utilities" (OuterVolumeSpecName: "utilities") pod "06f8e9b3-9b00-4fcb-ae98-1fac6314845e" (UID: "06f8e9b3-9b00-4fcb-ae98-1fac6314845e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.824245 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-kube-api-access-bpf27" (OuterVolumeSpecName: "kube-api-access-bpf27") pod "06f8e9b3-9b00-4fcb-ae98-1fac6314845e" (UID: "06f8e9b3-9b00-4fcb-ae98-1fac6314845e"). InnerVolumeSpecName "kube-api-access-bpf27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.867510 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06f8e9b3-9b00-4fcb-ae98-1fac6314845e" (UID: "06f8e9b3-9b00-4fcb-ae98-1fac6314845e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.920775 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpf27\" (UniqueName: \"kubernetes.io/projected/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-kube-api-access-bpf27\") on node \"crc\" DevicePath \"\""
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.920816 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:47:48 crc kubenswrapper[4979]: I0130 22:47:48.920828 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f8e9b3-9b00-4fcb-ae98-1fac6314845e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.275988 4979 generic.go:334] "Generic (PLEG): container finished" podID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerID="922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d" exitCode=0
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.276127 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kw66v"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.276158 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kw66v" event={"ID":"06f8e9b3-9b00-4fcb-ae98-1fac6314845e","Type":"ContainerDied","Data":"922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d"}
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.277113 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kw66v" event={"ID":"06f8e9b3-9b00-4fcb-ae98-1fac6314845e","Type":"ContainerDied","Data":"ab59619a27c710eb68b79d0a064ccdbed30ed0efc3ed64a23d934642a11a4801"}
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.277196 4979 scope.go:117] "RemoveContainer" containerID="922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.302109 4979 scope.go:117] "RemoveContainer" containerID="31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.304569 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kw66v"]
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.312250 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kw66v"]
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.319959 4979 scope.go:117] "RemoveContainer" containerID="26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.361017 4979 scope.go:117] "RemoveContainer" containerID="922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d"
Jan 30 22:47:49 crc kubenswrapper[4979]: E0130 22:47:49.361660 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d\": container with ID starting with 922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d not found: ID does not exist" containerID="922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d"
Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.361712
4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d"} err="failed to get container status \"922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d\": rpc error: code = NotFound desc = could not find container \"922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d\": container with ID starting with 922f85d1eec283e4f96bb7b95dbef5cd1fdf3a67785031f0e5fb26876d1d684d not found: ID does not exist" Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.361742 4979 scope.go:117] "RemoveContainer" containerID="31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a" Jan 30 22:47:49 crc kubenswrapper[4979]: E0130 22:47:49.362256 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a\": container with ID starting with 31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a not found: ID does not exist" containerID="31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a" Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.362308 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a"} err="failed to get container status \"31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a\": rpc error: code = NotFound desc = could not find container \"31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a\": container with ID starting with 31f78959c105e52ae1143a9a8cca66a0e5232a664fdc8c9e8d7a1e5d1f781e1a not found: ID does not exist" Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.362326 4979 scope.go:117] "RemoveContainer" containerID="26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf" Jan 30 22:47:49 crc kubenswrapper[4979]: E0130 22:47:49.362624 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf\": container with ID starting with 26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf not found: ID does not exist" containerID="26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf" Jan 30 22:47:49 crc kubenswrapper[4979]: I0130 22:47:49.362649 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf"} err="failed to get container status \"26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf\": rpc error: code = NotFound desc = could not find container \"26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf\": container with ID starting with 26878f6a9a4818eecd1b36fdf54e8068157c6422c251950ebbe500a78ad996bf not found: ID does not exist" Jan 30 22:47:51 crc kubenswrapper[4979]: I0130 22:47:51.077640 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" path="/var/lib/kubelet/pods/06f8e9b3-9b00-4fcb-ae98-1fac6314845e/volumes" Jan 30 22:48:02 crc kubenswrapper[4979]: I0130 22:48:02.039376 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:48:02 crc kubenswrapper[4979]: I0130 22:48:02.040208 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.035402 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t5h54"] Jan 30 22:48:24 crc kubenswrapper[4979]: E0130 22:48:24.036481 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="registry-server" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036505 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="registry-server" Jan 30 22:48:24 crc kubenswrapper[4979]: E0130 22:48:24.036524 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="extract-utilities" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036535 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="extract-utilities" Jan 30 22:48:24 crc kubenswrapper[4979]: E0130 22:48:24.036549 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="extract-content" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036563 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="extract-content" Jan 30 22:48:24 crc kubenswrapper[4979]: E0130 22:48:24.036577 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="extract-utilities" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036588 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="extract-utilities" Jan 30 22:48:24 crc kubenswrapper[4979]: E0130 22:48:24.036610 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="extract-content" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036621 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="extract-content" Jan 30 22:48:24 crc kubenswrapper[4979]: E0130 22:48:24.036641 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="registry-server" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036651 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="registry-server" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036875 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f8e9b3-9b00-4fcb-ae98-1fac6314845e" containerName="registry-server" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.036908 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="db22aed9-7413-4d06-8b61-fb6f730cf1cc" containerName="registry-server" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.038487 4979 util.go:30] "No sandbox for pod can be found. 
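The cpu_manager/memory_manager RemoveStaleState entries above run at pod admission: before redhat-operators-t5h54 starts, resource-manager state keyed by the UIDs of the two just-deleted catalog pods is swept out. A minimal sketch of such a sweep (illustrative types; the kubelet's actual state stores are more involved):

package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops assignments that belong to pods no longer
// known to the kubelet, as in the RemoveStaleState entries above.
func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", k.container, k.podUID)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	a := map[key]string{
		{"0e7199be", "registry-server"}: "cpus 0-1",
		{"live-pod", "app"}:             "cpus 2-3",
	}
	removeStaleState(a, map[string]bool{"live-pod": true})
	fmt.Println(len(a), "assignment(s) remain")
}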
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.045376 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5h54"] Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.146646 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkngz\" (UniqueName: \"kubernetes.io/projected/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-kube-api-access-rkngz\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.146751 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-utilities\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.146781 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-catalog-content\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.247683 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-utilities\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.247732 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-catalog-content\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.247822 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkngz\" (UniqueName: \"kubernetes.io/projected/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-kube-api-access-rkngz\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.248309 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-utilities\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.248377 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-catalog-content\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.770446 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rkngz\" (UniqueName: \"kubernetes.io/projected/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-kube-api-access-rkngz\") pod \"redhat-operators-t5h54\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:24 crc kubenswrapper[4979]: I0130 22:48:24.960057 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:25 crc kubenswrapper[4979]: I0130 22:48:25.470831 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5h54"] Jan 30 22:48:25 crc kubenswrapper[4979]: I0130 22:48:25.518989 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h54" event={"ID":"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f","Type":"ContainerStarted","Data":"0d28c1c09d1376ad3a53c665c8252e3a7d5a04a540cf91d15d8d747c76858a84"} Jan 30 22:48:26 crc kubenswrapper[4979]: I0130 22:48:26.528500 4979 generic.go:334] "Generic (PLEG): container finished" podID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerID="4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea" exitCode=0 Jan 30 22:48:26 crc kubenswrapper[4979]: I0130 22:48:26.528545 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h54" event={"ID":"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f","Type":"ContainerDied","Data":"4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea"} Jan 30 22:48:28 crc kubenswrapper[4979]: I0130 22:48:28.542947 4979 generic.go:334] "Generic (PLEG): container finished" podID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerID="51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9" exitCode=0 Jan 30 22:48:28 crc kubenswrapper[4979]: I0130 22:48:28.543071 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h54" event={"ID":"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f","Type":"ContainerDied","Data":"51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9"} Jan 30 22:48:29 crc kubenswrapper[4979]: I0130 22:48:29.551518 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h54" event={"ID":"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f","Type":"ContainerStarted","Data":"d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844"} Jan 30 22:48:29 crc kubenswrapper[4979]: I0130 22:48:29.577121 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t5h54" podStartSLOduration=2.813037983 podStartE2EDuration="5.577090269s" podCreationTimestamp="2026-01-30 22:48:24 +0000 UTC" firstStartedPulling="2026-01-30 22:48:26.530872751 +0000 UTC m=+4102.492119784" lastFinishedPulling="2026-01-30 22:48:29.294925037 +0000 UTC m=+4105.256172070" observedRunningTime="2026-01-30 22:48:29.572614468 +0000 UTC m=+4105.533861521" watchObservedRunningTime="2026-01-30 22:48:29.577090269 +0000 UTC m=+4105.538337312" Jan 30 22:48:32 crc kubenswrapper[4979]: I0130 22:48:32.039310 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:48:32 crc kubenswrapper[4979]: I0130 22:48:32.039755 4979 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:48:34 crc kubenswrapper[4979]: I0130 22:48:34.960338 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:34 crc kubenswrapper[4979]: I0130 22:48:34.960794 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:35 crc kubenswrapper[4979]: I0130 22:48:35.999937 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t5h54" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="registry-server" probeResult="failure" output=< Jan 30 22:48:35 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s Jan 30 22:48:35 crc kubenswrapper[4979]: > Jan 30 22:48:45 crc kubenswrapper[4979]: I0130 22:48:45.020636 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:45 crc kubenswrapper[4979]: I0130 22:48:45.094556 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:45 crc kubenswrapper[4979]: I0130 22:48:45.279054 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5h54"] Jan 30 22:48:46 crc kubenswrapper[4979]: I0130 22:48:46.696059 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t5h54" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="registry-server" containerID="cri-o://d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844" gracePeriod=2 Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.636350 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.704930 4979 generic.go:334] "Generic (PLEG): container finished" podID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerID="d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844" exitCode=0 Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.704996 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h54" event={"ID":"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f","Type":"ContainerDied","Data":"d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844"} Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.705309 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5h54" event={"ID":"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f","Type":"ContainerDied","Data":"0d28c1c09d1376ad3a53c665c8252e3a7d5a04a540cf91d15d8d747c76858a84"} Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.705334 4979 scope.go:117] "RemoveContainer" containerID="d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.705020 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5h54" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.721345 4979 scope.go:117] "RemoveContainer" containerID="51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.739047 4979 scope.go:117] "RemoveContainer" containerID="4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.760936 4979 scope.go:117] "RemoveContainer" containerID="d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844" Jan 30 22:48:47 crc kubenswrapper[4979]: E0130 22:48:47.761339 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844\": container with ID starting with d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844 not found: ID does not exist" containerID="d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.761373 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844"} err="failed to get container status \"d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844\": rpc error: code = NotFound desc = could not find container \"d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844\": container with ID starting with d1fe16061ea78bfffce60b0fef537202083615415dbee0d417fdaca8bdfb6844 not found: ID does not exist" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.761394 4979 scope.go:117] "RemoveContainer" containerID="51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9" Jan 30 22:48:47 crc kubenswrapper[4979]: E0130 22:48:47.761607 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9\": container with ID starting with 51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9 not found: ID does not exist" containerID="51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.761628 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9"} err="failed to get container status \"51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9\": rpc error: code = NotFound desc = could not find container \"51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9\": container with ID starting with 51b9c9cfd74d794d7dbd83cb5c3696e64ecabe0008e0ddaf131fa657a067abd9 not found: ID does not exist" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.761641 4979 scope.go:117] "RemoveContainer" containerID="4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea" Jan 30 22:48:47 crc kubenswrapper[4979]: E0130 22:48:47.761812 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea\": container with ID starting with 4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea not found: ID does not exist" containerID="4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea" 
Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.761830 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea"} err="failed to get container status \"4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea\": rpc error: code = NotFound desc = could not find container \"4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea\": container with ID starting with 4a3a87ba485ae6d31d4e9a83aaa9c0c3a8d99ea67cd24eb2c0adfe02d19eacea not found: ID does not exist" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.814429 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkngz\" (UniqueName: \"kubernetes.io/projected/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-kube-api-access-rkngz\") pod \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.814567 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-catalog-content\") pod \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.814637 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-utilities\") pod \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\" (UID: \"0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f\") " Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.815868 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-utilities" (OuterVolumeSpecName: "utilities") pod "0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" (UID: "0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.821145 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-kube-api-access-rkngz" (OuterVolumeSpecName: "kube-api-access-rkngz") pod "0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" (UID: "0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f"). InnerVolumeSpecName "kube-api-access-rkngz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.916815 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkngz\" (UniqueName: \"kubernetes.io/projected/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-kube-api-access-rkngz\") on node \"crc\" DevicePath \"\"" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.916861 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:48:47 crc kubenswrapper[4979]: I0130 22:48:47.949581 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" (UID: "0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:48:48 crc kubenswrapper[4979]: I0130 22:48:48.017782 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:48:48 crc kubenswrapper[4979]: I0130 22:48:48.039480 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5h54"] Jan 30 22:48:48 crc kubenswrapper[4979]: I0130 22:48:48.046135 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t5h54"] Jan 30 22:48:49 crc kubenswrapper[4979]: I0130 22:48:49.082582 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" path="/var/lib/kubelet/pods/0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f/volumes" Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.040232 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.040703 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.040749 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.041508 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba9860fb5e76e8b37e67c5dcfa291e9395710ff34773720960ef977de36e471e"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.041563 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://ba9860fb5e76e8b37e67c5dcfa291e9395710ff34773720960ef977de36e471e" gracePeriod=600 Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.834972 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="ba9860fb5e76e8b37e67c5dcfa291e9395710ff34773720960ef977de36e471e" exitCode=0 Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.835024 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"ba9860fb5e76e8b37e67c5dcfa291e9395710ff34773720960ef977de36e471e"} Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.835647 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"} Jan 30 22:49:02 crc kubenswrapper[4979]: I0130 22:49:02.835677 4979 scope.go:117] "RemoveContainer" containerID="38c7654ef2f7ba9c4265358fd7ff57dec847f90c52c9e5ff65798c36f58716d9" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.542144 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hszjp"] Jan 30 22:50:00 crc kubenswrapper[4979]: E0130 22:50:00.543347 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="extract-utilities" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.543366 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="extract-utilities" Jan 30 22:50:00 crc kubenswrapper[4979]: E0130 22:50:00.543389 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="registry-server" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.543409 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="registry-server" Jan 30 22:50:00 crc kubenswrapper[4979]: E0130 22:50:00.543436 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="extract-content" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.543444 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="extract-content" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.543618 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7199be-a5f7-43a5-ac5a-6d8dcbbcad0f" containerName="registry-server" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.545013 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.552629 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hszjp"] Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.598603 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njf59\" (UniqueName: \"kubernetes.io/projected/499781fa-40ab-4183-98f0-9ebb2907672d-kube-api-access-njf59\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.599070 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-catalog-content\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.599233 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-utilities\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.701340 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-utilities\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.701490 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njf59\" (UniqueName: \"kubernetes.io/projected/499781fa-40ab-4183-98f0-9ebb2907672d-kube-api-access-njf59\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.701527 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-catalog-content\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.702231 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-utilities\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.702311 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-catalog-content\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.721842 4979 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-njf59\" (UniqueName: \"kubernetes.io/projected/499781fa-40ab-4183-98f0-9ebb2907672d-kube-api-access-njf59\") pod \"redhat-marketplace-hszjp\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:00 crc kubenswrapper[4979]: I0130 22:50:00.885684 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:01 crc kubenswrapper[4979]: I0130 22:50:01.141892 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hszjp"] Jan 30 22:50:01 crc kubenswrapper[4979]: I0130 22:50:01.289187 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerStarted","Data":"6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677"} Jan 30 22:50:01 crc kubenswrapper[4979]: I0130 22:50:01.289239 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerStarted","Data":"21f0dc9adbd59b37846726239ed1298deaed53c89051139335e0150ee34b243c"} Jan 30 22:50:02 crc kubenswrapper[4979]: I0130 22:50:02.303436 4979 generic.go:334] "Generic (PLEG): container finished" podID="499781fa-40ab-4183-98f0-9ebb2907672d" containerID="6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677" exitCode=0 Jan 30 22:50:02 crc kubenswrapper[4979]: I0130 22:50:02.303549 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerDied","Data":"6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677"} Jan 30 22:50:03 crc kubenswrapper[4979]: I0130 22:50:03.314249 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerStarted","Data":"4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2"} Jan 30 22:50:04 crc kubenswrapper[4979]: I0130 22:50:04.330371 4979 generic.go:334] "Generic (PLEG): container finished" podID="499781fa-40ab-4183-98f0-9ebb2907672d" containerID="4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2" exitCode=0 Jan 30 22:50:04 crc kubenswrapper[4979]: I0130 22:50:04.330502 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerDied","Data":"4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2"} Jan 30 22:50:06 crc kubenswrapper[4979]: I0130 22:50:06.350868 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerStarted","Data":"a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3"} Jan 30 22:50:06 crc kubenswrapper[4979]: I0130 22:50:06.383669 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hszjp" podStartSLOduration=3.949433884 podStartE2EDuration="6.383650429s" podCreationTimestamp="2026-01-30 22:50:00 +0000 UTC" firstStartedPulling="2026-01-30 22:50:02.307715143 +0000 UTC m=+4198.268962196" lastFinishedPulling="2026-01-30 22:50:04.741931698 +0000 UTC 
m=+4200.703178741" observedRunningTime="2026-01-30 22:50:06.381622205 +0000 UTC m=+4202.342869248" watchObservedRunningTime="2026-01-30 22:50:06.383650429 +0000 UTC m=+4202.344897472" Jan 30 22:50:10 crc kubenswrapper[4979]: I0130 22:50:10.885865 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:10 crc kubenswrapper[4979]: I0130 22:50:10.886492 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:10 crc kubenswrapper[4979]: I0130 22:50:10.947638 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:11 crc kubenswrapper[4979]: I0130 22:50:11.477179 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:11 crc kubenswrapper[4979]: I0130 22:50:11.566340 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hszjp"] Jan 30 22:50:13 crc kubenswrapper[4979]: I0130 22:50:13.414626 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hszjp" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="registry-server" containerID="cri-o://a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3" gracePeriod=2 Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.054053 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.171116 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-catalog-content\") pod \"499781fa-40ab-4183-98f0-9ebb2907672d\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.171258 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-utilities\") pod \"499781fa-40ab-4183-98f0-9ebb2907672d\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.171285 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njf59\" (UniqueName: \"kubernetes.io/projected/499781fa-40ab-4183-98f0-9ebb2907672d-kube-api-access-njf59\") pod \"499781fa-40ab-4183-98f0-9ebb2907672d\" (UID: \"499781fa-40ab-4183-98f0-9ebb2907672d\") " Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.172666 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-utilities" (OuterVolumeSpecName: "utilities") pod "499781fa-40ab-4183-98f0-9ebb2907672d" (UID: "499781fa-40ab-4183-98f0-9ebb2907672d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.182307 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/499781fa-40ab-4183-98f0-9ebb2907672d-kube-api-access-njf59" (OuterVolumeSpecName: "kube-api-access-njf59") pod "499781fa-40ab-4183-98f0-9ebb2907672d" (UID: "499781fa-40ab-4183-98f0-9ebb2907672d"). 
InnerVolumeSpecName "kube-api-access-njf59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.201321 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "499781fa-40ab-4183-98f0-9ebb2907672d" (UID: "499781fa-40ab-4183-98f0-9ebb2907672d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.272950 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.273005 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njf59\" (UniqueName: \"kubernetes.io/projected/499781fa-40ab-4183-98f0-9ebb2907672d-kube-api-access-njf59\") on node \"crc\" DevicePath \"\"" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.273022 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499781fa-40ab-4183-98f0-9ebb2907672d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.428806 4979 generic.go:334] "Generic (PLEG): container finished" podID="499781fa-40ab-4183-98f0-9ebb2907672d" containerID="a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3" exitCode=0 Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.428878 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerDied","Data":"a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3"} Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.428902 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hszjp" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.428924 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hszjp" event={"ID":"499781fa-40ab-4183-98f0-9ebb2907672d","Type":"ContainerDied","Data":"21f0dc9adbd59b37846726239ed1298deaed53c89051139335e0150ee34b243c"} Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.428952 4979 scope.go:117] "RemoveContainer" containerID="a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.465772 4979 scope.go:117] "RemoveContainer" containerID="4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.465895 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hszjp"] Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.470808 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hszjp"] Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.491164 4979 scope.go:117] "RemoveContainer" containerID="6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.517016 4979 scope.go:117] "RemoveContainer" containerID="a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3" Jan 30 22:50:14 crc kubenswrapper[4979]: E0130 22:50:14.517629 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3\": container with ID starting with a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3 not found: ID does not exist" containerID="a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.517901 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3"} err="failed to get container status \"a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3\": rpc error: code = NotFound desc = could not find container \"a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3\": container with ID starting with a5627ce255c9545700a18855aed5a6871179b5df8eb3e0a5bd471bded5f8ddc3 not found: ID does not exist" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.518162 4979 scope.go:117] "RemoveContainer" containerID="4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2" Jan 30 22:50:14 crc kubenswrapper[4979]: E0130 22:50:14.518847 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2\": container with ID starting with 4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2 not found: ID does not exist" containerID="4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.518928 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2"} err="failed to get container status \"4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2\": rpc error: code = NotFound desc = could not find 
container \"4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2\": container with ID starting with 4f41b4c4113212b056b54312b347d2da38039099a0f07eb77dceec6b5061bfc2 not found: ID does not exist" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.518983 4979 scope.go:117] "RemoveContainer" containerID="6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677" Jan 30 22:50:14 crc kubenswrapper[4979]: E0130 22:50:14.519475 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677\": container with ID starting with 6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677 not found: ID does not exist" containerID="6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677" Jan 30 22:50:14 crc kubenswrapper[4979]: I0130 22:50:14.519519 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677"} err="failed to get container status \"6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677\": rpc error: code = NotFound desc = could not find container \"6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677\": container with ID starting with 6a5ce0a2a8112c0ab3619210628347fad94fb9558bbf51c627b3af1d719d2677 not found: ID does not exist" Jan 30 22:50:15 crc kubenswrapper[4979]: I0130 22:50:15.084878 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" path="/var/lib/kubelet/pods/499781fa-40ab-4183-98f0-9ebb2907672d/volumes" Jan 30 22:51:02 crc kubenswrapper[4979]: I0130 22:51:02.039670 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:51:02 crc kubenswrapper[4979]: I0130 22:51:02.040451 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:51:32 crc kubenswrapper[4979]: I0130 22:51:32.039697 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:51:32 crc kubenswrapper[4979]: I0130 22:51:32.040656 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.039218 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 
22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.039967 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.040018 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg"
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.040725 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.040800 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" gracePeriod=600
Jan 30 22:52:02 crc kubenswrapper[4979]: E0130 22:52:02.159155 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.265699 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" exitCode=0
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.265740 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"}
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.265784 4979 scope.go:117] "RemoveContainer" containerID="ba9860fb5e76e8b37e67c5dcfa291e9395710ff34773720960ef977de36e471e"
Jan 30 22:52:02 crc kubenswrapper[4979]: I0130 22:52:02.266603 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:52:02 crc kubenswrapper[4979]: E0130 22:52:02.267128 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:52:15 crc kubenswrapper[4979]: I0130 22:52:15.079522 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:52:15 crc kubenswrapper[4979]: E0130 22:52:15.081953 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:52:29 crc kubenswrapper[4979]: I0130 22:52:29.069489 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:52:29 crc kubenswrapper[4979]: E0130 22:52:29.070257 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:52:40 crc kubenswrapper[4979]: I0130 22:52:40.070171 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:52:40 crc kubenswrapper[4979]: E0130 22:52:40.070939 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:52:51 crc kubenswrapper[4979]: I0130 22:52:51.070405 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:52:51 crc kubenswrapper[4979]: E0130 22:52:51.071568 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:53:03 crc kubenswrapper[4979]: I0130 22:53:03.070197 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:53:03 crc kubenswrapper[4979]: E0130 22:53:03.071593 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:53:18 crc kubenswrapper[4979]: I0130 22:53:18.070513 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:53:18 crc kubenswrapper[4979]: E0130 22:53:18.071313 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:53:32 crc kubenswrapper[4979]: I0130 22:53:32.069742 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:53:32 crc kubenswrapper[4979]: E0130 22:53:32.070823 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:53:45 crc kubenswrapper[4979]: I0130 22:53:45.079601 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:53:45 crc kubenswrapper[4979]: E0130 22:53:45.080570 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:53:58 crc kubenswrapper[4979]: I0130 22:53:58.070121 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:53:58 crc kubenswrapper[4979]: E0130 22:53:58.070866 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:54:13 crc kubenswrapper[4979]: I0130 22:54:13.069803 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:54:13 crc kubenswrapper[4979]: E0130 22:54:13.071016 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:54:27 crc kubenswrapper[4979]: I0130 22:54:27.069914 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:54:27 crc kubenswrapper[4979]: E0130 22:54:27.070752 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:54:38 crc kubenswrapper[4979]: I0130 22:54:38.070529 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:54:38 crc kubenswrapper[4979]: E0130 22:54:38.071365 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:54:52 crc kubenswrapper[4979]: I0130 22:54:52.069737 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:54:52 crc kubenswrapper[4979]: E0130 22:54:52.070829 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:55:06 crc kubenswrapper[4979]: I0130 22:55:06.069557 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:55:06 crc kubenswrapper[4979]: E0130 22:55:06.070690 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:55:21 crc kubenswrapper[4979]: I0130 22:55:21.070126 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:55:21 crc kubenswrapper[4979]: E0130 22:55:21.070977 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:55:33 crc kubenswrapper[4979]: I0130 22:55:33.070426 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:55:33 crc kubenswrapper[4979]: E0130 22:55:33.071775 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:55:47 crc kubenswrapper[4979]: I0130 22:55:47.070059 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:55:47 crc kubenswrapper[4979]: E0130 22:55:47.070827 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:56:00 crc kubenswrapper[4979]: I0130 22:56:00.069908 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:56:00 crc kubenswrapper[4979]: E0130 22:56:00.070563 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:56:14 crc kubenswrapper[4979]: I0130 22:56:14.070151 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:56:14 crc kubenswrapper[4979]: E0130 22:56:14.070808 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:56:28 crc kubenswrapper[4979]: I0130 22:56:28.069717 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:56:28 crc kubenswrapper[4979]: E0130 22:56:28.070728 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.231465 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-sr9vn"]
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.236999 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-sr9vn"]
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.354811 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-bws8q"]
Jan 30 22:56:31 crc kubenswrapper[4979]: E0130 22:56:31.355414 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="extract-utilities"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.355498 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="extract-utilities"
Jan 30 22:56:31 crc kubenswrapper[4979]: E0130 22:56:31.355569 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="registry-server"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.355628 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="registry-server"
Jan 30 22:56:31 crc kubenswrapper[4979]: E0130 22:56:31.355704 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="extract-content"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.355759 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="extract-content"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.355928 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="499781fa-40ab-4183-98f0-9ebb2907672d" containerName="registry-server"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.356463 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.358435 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.358439 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.366483 4979 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-jpprx"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.366483 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.372638 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-bws8q"]
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.377849 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7htk\" (UniqueName: \"kubernetes.io/projected/5cfa1ab3-8375-406f-8337-8bf16b0eca15-kube-api-access-q7htk\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.377898 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5cfa1ab3-8375-406f-8337-8bf16b0eca15-node-mnt\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.377938 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5cfa1ab3-8375-406f-8337-8bf16b0eca15-crc-storage\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.478399 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7htk\" (UniqueName: \"kubernetes.io/projected/5cfa1ab3-8375-406f-8337-8bf16b0eca15-kube-api-access-q7htk\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.478438 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5cfa1ab3-8375-406f-8337-8bf16b0eca15-node-mnt\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.478466 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5cfa1ab3-8375-406f-8337-8bf16b0eca15-crc-storage\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.478780 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5cfa1ab3-8375-406f-8337-8bf16b0eca15-node-mnt\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.479118 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5cfa1ab3-8375-406f-8337-8bf16b0eca15-crc-storage\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.504049 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7htk\" (UniqueName: \"kubernetes.io/projected/5cfa1ab3-8375-406f-8337-8bf16b0eca15-kube-api-access-q7htk\") pod \"crc-storage-crc-bws8q\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") " pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:31 crc kubenswrapper[4979]: I0130 22:56:31.682739 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:32 crc kubenswrapper[4979]: I0130 22:56:32.095272 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-bws8q"]
Jan 30 22:56:32 crc kubenswrapper[4979]: I0130 22:56:32.103390 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 22:56:32 crc kubenswrapper[4979]: I0130 22:56:32.411650 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-bws8q" event={"ID":"5cfa1ab3-8375-406f-8337-8bf16b0eca15","Type":"ContainerStarted","Data":"e6433d25883518b82c9d988c509f16f512f8e37c7dee620c5b63b7ddcb930dc9"}
Jan 30 22:56:33 crc kubenswrapper[4979]: I0130 22:56:33.080047 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b164f6-7e71-4403-9598-6673cea6876e" path="/var/lib/kubelet/pods/55b164f6-7e71-4403-9598-6673cea6876e/volumes"
Jan 30 22:56:33 crc kubenswrapper[4979]: I0130 22:56:33.420467 4979 generic.go:334] "Generic (PLEG): container finished" podID="5cfa1ab3-8375-406f-8337-8bf16b0eca15" containerID="f9b321201755262611e536dca11c7193aa5f320fa99f7da74aac970a57d934ef" exitCode=0
Jan 30 22:56:33 crc kubenswrapper[4979]: I0130 22:56:33.420563 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-bws8q" event={"ID":"5cfa1ab3-8375-406f-8337-8bf16b0eca15","Type":"ContainerDied","Data":"f9b321201755262611e536dca11c7193aa5f320fa99f7da74aac970a57d934ef"}
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.778440 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.835282 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5cfa1ab3-8375-406f-8337-8bf16b0eca15-node-mnt\") pod \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") "
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.835429 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cfa1ab3-8375-406f-8337-8bf16b0eca15-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "5cfa1ab3-8375-406f-8337-8bf16b0eca15" (UID: "5cfa1ab3-8375-406f-8337-8bf16b0eca15"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.835466 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7htk\" (UniqueName: \"kubernetes.io/projected/5cfa1ab3-8375-406f-8337-8bf16b0eca15-kube-api-access-q7htk\") pod \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") "
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.835628 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5cfa1ab3-8375-406f-8337-8bf16b0eca15-crc-storage\") pod \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\" (UID: \"5cfa1ab3-8375-406f-8337-8bf16b0eca15\") "
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.836019 4979 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5cfa1ab3-8375-406f-8337-8bf16b0eca15-node-mnt\") on node \"crc\" DevicePath \"\""
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.840886 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cfa1ab3-8375-406f-8337-8bf16b0eca15-kube-api-access-q7htk" (OuterVolumeSpecName: "kube-api-access-q7htk") pod "5cfa1ab3-8375-406f-8337-8bf16b0eca15" (UID: "5cfa1ab3-8375-406f-8337-8bf16b0eca15"). InnerVolumeSpecName "kube-api-access-q7htk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.853018 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cfa1ab3-8375-406f-8337-8bf16b0eca15-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "5cfa1ab3-8375-406f-8337-8bf16b0eca15" (UID: "5cfa1ab3-8375-406f-8337-8bf16b0eca15"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.937462 4979 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5cfa1ab3-8375-406f-8337-8bf16b0eca15-crc-storage\") on node \"crc\" DevicePath \"\""
Jan 30 22:56:34 crc kubenswrapper[4979]: I0130 22:56:34.937499 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7htk\" (UniqueName: \"kubernetes.io/projected/5cfa1ab3-8375-406f-8337-8bf16b0eca15-kube-api-access-q7htk\") on node \"crc\" DevicePath \"\""
Jan 30 22:56:35 crc kubenswrapper[4979]: I0130 22:56:35.444778 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-bws8q" event={"ID":"5cfa1ab3-8375-406f-8337-8bf16b0eca15","Type":"ContainerDied","Data":"e6433d25883518b82c9d988c509f16f512f8e37c7dee620c5b63b7ddcb930dc9"}
Jan 30 22:56:35 crc kubenswrapper[4979]: I0130 22:56:35.445299 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6433d25883518b82c9d988c509f16f512f8e37c7dee620c5b63b7ddcb930dc9"
Jan 30 22:56:35 crc kubenswrapper[4979]: I0130 22:56:35.444850 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-bws8q"
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.851254 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-bws8q"]
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.858063 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-bws8q"]
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.966414 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-q6qv6"]
Jan 30 22:56:36 crc kubenswrapper[4979]: E0130 22:56:36.966774 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfa1ab3-8375-406f-8337-8bf16b0eca15" containerName="storage"
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.966794 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfa1ab3-8375-406f-8337-8bf16b0eca15" containerName="storage"
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.966999 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfa1ab3-8375-406f-8337-8bf16b0eca15" containerName="storage"
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.967572 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.970395 4979 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-jpprx"
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.971333 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.971575 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.973131 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 30 22:56:36 crc kubenswrapper[4979]: I0130 22:56:36.977808 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-q6qv6"]
Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.067449 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/51e286a1-1a78-4074-83f5-967245b1c36a-node-mnt\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.067561 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knhbg\" (UniqueName: \"kubernetes.io/projected/51e286a1-1a78-4074-83f5-967245b1c36a-kube-api-access-knhbg\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.067629 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/51e286a1-1a78-4074-83f5-967245b1c36a-crc-storage\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.080093 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cfa1ab3-8375-406f-8337-8bf16b0eca15" path="/var/lib/kubelet/pods/5cfa1ab3-8375-406f-8337-8bf16b0eca15/volumes"
Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.169356 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knhbg\" (UniqueName: \"kubernetes.io/projected/51e286a1-1a78-4074-83f5-967245b1c36a-kube-api-access-knhbg\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.169437 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/51e286a1-1a78-4074-83f5-967245b1c36a-crc-storage\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.169536 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/51e286a1-1a78-4074-83f5-967245b1c36a-node-mnt\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.169860 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/51e286a1-1a78-4074-83f5-967245b1c36a-node-mnt\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.170435 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/51e286a1-1a78-4074-83f5-967245b1c36a-crc-storage\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.195382 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knhbg\" (UniqueName: \"kubernetes.io/projected/51e286a1-1a78-4074-83f5-967245b1c36a-kube-api-access-knhbg\") pod \"crc-storage-crc-q6qv6\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") " pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.292491 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:37 crc kubenswrapper[4979]: I0130 22:56:37.754778 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-q6qv6"]
Jan 30 22:56:38 crc kubenswrapper[4979]: I0130 22:56:38.468495 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q6qv6" event={"ID":"51e286a1-1a78-4074-83f5-967245b1c36a","Type":"ContainerStarted","Data":"fa5e251e45b390ee77b5b0149a7bf2c508aa1cbb4edd80741e9f9aecdfa56901"}
Jan 30 22:56:39 crc kubenswrapper[4979]: I0130 22:56:39.479363 4979 generic.go:334] "Generic (PLEG): container finished" podID="51e286a1-1a78-4074-83f5-967245b1c36a" containerID="b532d569095e0ff5c9224950f17c01109c557dea11e198d44eead3dbf56c7594" exitCode=0
Jan 30 22:56:39 crc kubenswrapper[4979]: I0130 22:56:39.479464 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q6qv6" event={"ID":"51e286a1-1a78-4074-83f5-967245b1c36a","Type":"ContainerDied","Data":"b532d569095e0ff5c9224950f17c01109c557dea11e198d44eead3dbf56c7594"}
Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.844852 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.935569 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/51e286a1-1a78-4074-83f5-967245b1c36a-crc-storage\") pod \"51e286a1-1a78-4074-83f5-967245b1c36a\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") "
Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.935686 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/51e286a1-1a78-4074-83f5-967245b1c36a-node-mnt\") pod \"51e286a1-1a78-4074-83f5-967245b1c36a\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") "
Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.935773 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knhbg\" (UniqueName: \"kubernetes.io/projected/51e286a1-1a78-4074-83f5-967245b1c36a-kube-api-access-knhbg\") pod \"51e286a1-1a78-4074-83f5-967245b1c36a\" (UID: \"51e286a1-1a78-4074-83f5-967245b1c36a\") "
Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.936688 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51e286a1-1a78-4074-83f5-967245b1c36a-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "51e286a1-1a78-4074-83f5-967245b1c36a" (UID: "51e286a1-1a78-4074-83f5-967245b1c36a"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.946558 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e286a1-1a78-4074-83f5-967245b1c36a-kube-api-access-knhbg" (OuterVolumeSpecName: "kube-api-access-knhbg") pod "51e286a1-1a78-4074-83f5-967245b1c36a" (UID: "51e286a1-1a78-4074-83f5-967245b1c36a"). InnerVolumeSpecName "kube-api-access-knhbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:56:40 crc kubenswrapper[4979]: I0130 22:56:40.971305 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e286a1-1a78-4074-83f5-967245b1c36a-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "51e286a1-1a78-4074-83f5-967245b1c36a" (UID: "51e286a1-1a78-4074-83f5-967245b1c36a"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.037742 4979 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/51e286a1-1a78-4074-83f5-967245b1c36a-node-mnt\") on node \"crc\" DevicePath \"\""
Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.037785 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knhbg\" (UniqueName: \"kubernetes.io/projected/51e286a1-1a78-4074-83f5-967245b1c36a-kube-api-access-knhbg\") on node \"crc\" DevicePath \"\""
Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.037795 4979 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/51e286a1-1a78-4074-83f5-967245b1c36a-crc-storage\") on node \"crc\" DevicePath \"\""
Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.069806 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:56:41 crc kubenswrapper[4979]: E0130 22:56:41.070351 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.501208 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q6qv6" event={"ID":"51e286a1-1a78-4074-83f5-967245b1c36a","Type":"ContainerDied","Data":"fa5e251e45b390ee77b5b0149a7bf2c508aa1cbb4edd80741e9f9aecdfa56901"}
Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.501254 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa5e251e45b390ee77b5b0149a7bf2c508aa1cbb4edd80741e9f9aecdfa56901"
Jan 30 22:56:41 crc kubenswrapper[4979]: I0130 22:56:41.501339 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q6qv6"
Jan 30 22:56:54 crc kubenswrapper[4979]: I0130 22:56:54.069658 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:56:54 crc kubenswrapper[4979]: E0130 22:56:54.070549 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 22:57:07 crc kubenswrapper[4979]: I0130 22:57:07.069857 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856"
Jan 30 22:57:07 crc kubenswrapper[4979]: I0130 22:57:07.686464 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"3bba97c606dbe9c68f48bc5e0029f45fc1e7266ce68f26843db3d15f9ef6fef9"}
Jan 30 22:57:25 crc kubenswrapper[4979]: I0130 22:57:25.785708 4979 scope.go:117] "RemoveContainer" containerID="f69e5e60ca65ac037198a7875cb73ae5dd60bb9ab12c82aead51159afd7e44ab"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.277127 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5ffgp"]
Jan 30 22:57:32 crc kubenswrapper[4979]: E0130 22:57:32.278145 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e286a1-1a78-4074-83f5-967245b1c36a" containerName="storage"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.278158 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e286a1-1a78-4074-83f5-967245b1c36a" containerName="storage"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.278328 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e286a1-1a78-4074-83f5-967245b1c36a" containerName="storage"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.289260 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.331636 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5ffgp"]
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.422380 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-utilities\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.422907 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-catalog-content\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.422991 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ccnw\" (UniqueName: \"kubernetes.io/projected/623675c5-9919-4674-b268-95d143a04fee-kube-api-access-4ccnw\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.524400 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ccnw\" (UniqueName: \"kubernetes.io/projected/623675c5-9919-4674-b268-95d143a04fee-kube-api-access-4ccnw\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.524471 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-utilities\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.524502 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-catalog-content\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.524931 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-catalog-content\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.525051 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-utilities\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.547169 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ccnw\" (UniqueName: \"kubernetes.io/projected/623675c5-9919-4674-b268-95d143a04fee-kube-api-access-4ccnw\") pod \"certified-operators-5ffgp\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") " pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:32 crc kubenswrapper[4979]: I0130 22:57:32.632640 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:33 crc kubenswrapper[4979]: I0130 22:57:33.126292 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5ffgp"]
Jan 30 22:57:33 crc kubenswrapper[4979]: I0130 22:57:33.897962 4979 generic.go:334] "Generic (PLEG): container finished" podID="623675c5-9919-4674-b268-95d143a04fee" containerID="2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744" exitCode=0
Jan 30 22:57:33 crc kubenswrapper[4979]: I0130 22:57:33.898059 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ffgp" event={"ID":"623675c5-9919-4674-b268-95d143a04fee","Type":"ContainerDied","Data":"2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744"}
Jan 30 22:57:33 crc kubenswrapper[4979]: I0130 22:57:33.898339 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ffgp" event={"ID":"623675c5-9919-4674-b268-95d143a04fee","Type":"ContainerStarted","Data":"c17c52095893b902e8ea8de1b64bac329fbf99d9059d027246fa472611bb55dc"}
Jan 30 22:57:35 crc kubenswrapper[4979]: I0130 22:57:35.931921 4979 generic.go:334] "Generic (PLEG): container finished" podID="623675c5-9919-4674-b268-95d143a04fee" containerID="91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c" exitCode=0
Jan 30 22:57:35 crc kubenswrapper[4979]: I0130 22:57:35.932119 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ffgp" event={"ID":"623675c5-9919-4674-b268-95d143a04fee","Type":"ContainerDied","Data":"91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c"}
Jan 30 22:57:36 crc kubenswrapper[4979]: I0130 22:57:36.940468 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ffgp" event={"ID":"623675c5-9919-4674-b268-95d143a04fee","Type":"ContainerStarted","Data":"7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5"}
Jan 30 22:57:36 crc kubenswrapper[4979]: I0130 22:57:36.962147 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5ffgp" podStartSLOduration=2.501053168 podStartE2EDuration="4.96212849s" podCreationTimestamp="2026-01-30 22:57:32 +0000 UTC" firstStartedPulling="2026-01-30 22:57:33.900181568 +0000 UTC m=+4649.861428631" lastFinishedPulling="2026-01-30 22:57:36.36125692 +0000 UTC m=+4652.322503953" observedRunningTime="2026-01-30 22:57:36.958400319 +0000 UTC m=+4652.919647372" watchObservedRunningTime="2026-01-30 22:57:36.96212849 +0000 UTC m=+4652.923375523"
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.361622 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xvvr4"]
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.365497 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.380454 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvvr4"]
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.474682 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-catalog-content\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.474826 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkdql\" (UniqueName: \"kubernetes.io/projected/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-kube-api-access-rkdql\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.475146 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-utilities\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.576822 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-utilities\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.576889 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-catalog-content\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.576954 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkdql\" (UniqueName: \"kubernetes.io/projected/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-kube-api-access-rkdql\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.577714 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-utilities\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.577772 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-catalog-content\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.603422 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkdql\" (UniqueName: \"kubernetes.io/projected/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-kube-api-access-rkdql\") pod \"community-operators-xvvr4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:41 crc kubenswrapper[4979]: I0130 22:57:41.694693 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.168215 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvvr4"]
Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.633666 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.633999 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.674440 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.994974 4979 generic.go:334] "Generic (PLEG): container finished" podID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerID="992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995" exitCode=0
Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.995018 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerDied","Data":"992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995"}
Jan 30 22:57:42 crc kubenswrapper[4979]: I0130 22:57:42.995090 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerStarted","Data":"637798bf8cb0e9717c3ac1817083cac1bf20c9222da9c74a0b8b70e0c5201c1c"}
Jan 30 22:57:43 crc kubenswrapper[4979]: I0130 22:57:43.040609 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:44 crc kubenswrapper[4979]: I0130 22:57:44.004176 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerStarted","Data":"6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67"}
Jan 30 22:57:44 crc kubenswrapper[4979]: I0130 22:57:44.937307 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5ffgp"]
Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.015789 4979 generic.go:334] "Generic (PLEG): container finished" podID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerID="6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67" exitCode=0
Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.015856 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerDied","Data":"6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67"}
Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.016070 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5ffgp" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="registry-server" containerID="cri-o://7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5" gracePeriod=2
Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.726343 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.838698 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-utilities\") pod \"623675c5-9919-4674-b268-95d143a04fee\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") "
Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.838762 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ccnw\" (UniqueName: \"kubernetes.io/projected/623675c5-9919-4674-b268-95d143a04fee-kube-api-access-4ccnw\") pod \"623675c5-9919-4674-b268-95d143a04fee\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") "
Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.838820 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-catalog-content\") pod \"623675c5-9919-4674-b268-95d143a04fee\" (UID: \"623675c5-9919-4674-b268-95d143a04fee\") "
Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.839631 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-utilities" (OuterVolumeSpecName: "utilities") pod "623675c5-9919-4674-b268-95d143a04fee" (UID: "623675c5-9919-4674-b268-95d143a04fee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.844751 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/623675c5-9919-4674-b268-95d143a04fee-kube-api-access-4ccnw" (OuterVolumeSpecName: "kube-api-access-4ccnw") pod "623675c5-9919-4674-b268-95d143a04fee" (UID: "623675c5-9919-4674-b268-95d143a04fee"). InnerVolumeSpecName "kube-api-access-4ccnw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.941245 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:57:45 crc kubenswrapper[4979]: I0130 22:57:45.941271 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ccnw\" (UniqueName: \"kubernetes.io/projected/623675c5-9919-4674-b268-95d143a04fee-kube-api-access-4ccnw\") on node \"crc\" DevicePath \"\""
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.023017 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerStarted","Data":"7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f"}
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.024644 4979 generic.go:334] "Generic (PLEG): container finished" podID="623675c5-9919-4674-b268-95d143a04fee" containerID="7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5" exitCode=0
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.024692 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ffgp" event={"ID":"623675c5-9919-4674-b268-95d143a04fee","Type":"ContainerDied","Data":"7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5"}
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.024725 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ffgp" event={"ID":"623675c5-9919-4674-b268-95d143a04fee","Type":"ContainerDied","Data":"c17c52095893b902e8ea8de1b64bac329fbf99d9059d027246fa472611bb55dc"}
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.024743 4979 scope.go:117] "RemoveContainer" containerID="7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.024782 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ffgp"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.040980 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xvvr4" podStartSLOduration=2.331165105 podStartE2EDuration="5.040967064s" podCreationTimestamp="2026-01-30 22:57:41 +0000 UTC" firstStartedPulling="2026-01-30 22:57:42.997138272 +0000 UTC m=+4658.958385305" lastFinishedPulling="2026-01-30 22:57:45.706940221 +0000 UTC m=+4661.668187264" observedRunningTime="2026-01-30 22:57:46.038646671 +0000 UTC m=+4661.999893704" watchObservedRunningTime="2026-01-30 22:57:46.040967064 +0000 UTC m=+4662.002214097"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.059284 4979 scope.go:117] "RemoveContainer" containerID="91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.076124 4979 scope.go:117] "RemoveContainer" containerID="2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.088575 4979 scope.go:117] "RemoveContainer" containerID="7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5"
Jan 30 22:57:46 crc kubenswrapper[4979]: E0130 22:57:46.088940 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5\": container with ID starting with 7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5 not found: ID does not exist" containerID="7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.088995 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5"} err="failed to get container status \"7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5\": rpc error: code = NotFound desc = could not find container \"7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5\": container with ID starting with 7fff297e933bfbec8ae0156e7a9bdf89c5c9cb14a7912ef1871e5fad791d8ec5 not found: ID does not exist"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.089049 4979 scope.go:117] "RemoveContainer" containerID="91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c"
Jan 30 22:57:46 crc kubenswrapper[4979]: E0130 22:57:46.089334 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c\": container with ID starting with 91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c not found: ID does not exist" containerID="91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.089370 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c"} err="failed to get container status \"91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c\": rpc error: code = NotFound desc = could not find container \"91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c\": container with ID starting with 91ee9cdd360c4d5f128919ed0787eb2f8a6c1f03d88db3914ce63f6862d4365c not found: ID does not exist"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.089390 4979 scope.go:117] "RemoveContainer" containerID="2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744"
Jan 30 22:57:46 crc kubenswrapper[4979]: E0130 22:57:46.089618 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744\": container with ID starting with 2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744 not found: ID does not exist" containerID="2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.089647 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744"} err="failed to get container status \"2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744\": rpc error: code = NotFound desc = could not find container \"2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744\": container with ID starting with 2ebdac47924de0a41a2a1e4c95bf8974de6611c55d06a94be9e7b3e7edc8d744 not found: ID does not exist"
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.219425 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "623675c5-9919-4674-b268-95d143a04fee" (UID: "623675c5-9919-4674-b268-95d143a04fee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.247350 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623675c5-9919-4674-b268-95d143a04fee-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.354818 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5ffgp"]
Jan 30 22:57:46 crc kubenswrapper[4979]: I0130 22:57:46.360227 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5ffgp"]
Jan 30 22:57:47 crc kubenswrapper[4979]: I0130 22:57:47.078642 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="623675c5-9919-4674-b268-95d143a04fee" path="/var/lib/kubelet/pods/623675c5-9919-4674-b268-95d143a04fee/volumes"
Jan 30 22:57:51 crc kubenswrapper[4979]: I0130 22:57:51.695606 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:51 crc kubenswrapper[4979]: I0130 22:57:51.695948 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:51 crc kubenswrapper[4979]: I0130 22:57:51.759915 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:52 crc kubenswrapper[4979]: I0130 22:57:52.131784 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xvvr4"
Jan 30 22:57:52 crc kubenswrapper[4979]: I0130 22:57:52.181903 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xvvr4"]
Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.094799 4979 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xvvr4" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="registry-server" containerID="cri-o://7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f" gracePeriod=2 Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.526392 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.582622 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkdql\" (UniqueName: \"kubernetes.io/projected/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-kube-api-access-rkdql\") pod \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.582727 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-utilities\") pod \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.582811 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-catalog-content\") pod \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\" (UID: \"97674aa1-34d3-4bb3-a4f5-31af8b1138c4\") " Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.584426 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-utilities" (OuterVolumeSpecName: "utilities") pod "97674aa1-34d3-4bb3-a4f5-31af8b1138c4" (UID: "97674aa1-34d3-4bb3-a4f5-31af8b1138c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.589946 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-kube-api-access-rkdql" (OuterVolumeSpecName: "kube-api-access-rkdql") pod "97674aa1-34d3-4bb3-a4f5-31af8b1138c4" (UID: "97674aa1-34d3-4bb3-a4f5-31af8b1138c4"). InnerVolumeSpecName "kube-api-access-rkdql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.634416 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97674aa1-34d3-4bb3-a4f5-31af8b1138c4" (UID: "97674aa1-34d3-4bb3-a4f5-31af8b1138c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.684465 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.684499 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkdql\" (UniqueName: \"kubernetes.io/projected/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-kube-api-access-rkdql\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:54 crc kubenswrapper[4979]: I0130 22:57:54.684509 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97674aa1-34d3-4bb3-a4f5-31af8b1138c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.102748 4979 generic.go:334] "Generic (PLEG): container finished" podID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerID="7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f" exitCode=0 Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.102805 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerDied","Data":"7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f"} Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.102843 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvvr4" event={"ID":"97674aa1-34d3-4bb3-a4f5-31af8b1138c4","Type":"ContainerDied","Data":"637798bf8cb0e9717c3ac1817083cac1bf20c9222da9c74a0b8b70e0c5201c1c"} Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.102846 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xvvr4" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.102862 4979 scope.go:117] "RemoveContainer" containerID="7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.124990 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xvvr4"] Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.131292 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xvvr4"] Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.136214 4979 scope.go:117] "RemoveContainer" containerID="6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.167011 4979 scope.go:117] "RemoveContainer" containerID="992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.202098 4979 scope.go:117] "RemoveContainer" containerID="7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f" Jan 30 22:57:55 crc kubenswrapper[4979]: E0130 22:57:55.202745 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f\": container with ID starting with 7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f not found: ID does not exist" containerID="7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.202823 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f"} err="failed to get container status \"7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f\": rpc error: code = NotFound desc = could not find container \"7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f\": container with ID starting with 7157cd4d788fbcb767953dff6775688b647be2a5417b5870b1172623e0b5922f not found: ID does not exist" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.202857 4979 scope.go:117] "RemoveContainer" containerID="6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67" Jan 30 22:57:55 crc kubenswrapper[4979]: E0130 22:57:55.203355 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67\": container with ID starting with 6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67 not found: ID does not exist" containerID="6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.203410 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67"} err="failed to get container status \"6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67\": rpc error: code = NotFound desc = could not find container \"6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67\": container with ID starting with 6f499604367241aaed9726fc13cf12cb9313d11fd3c99622990e4eb49eb49f67 not found: ID does not exist" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.203438 4979 scope.go:117] "RemoveContainer" 
containerID="992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995" Jan 30 22:57:55 crc kubenswrapper[4979]: E0130 22:57:55.203920 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995\": container with ID starting with 992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995 not found: ID does not exist" containerID="992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995" Jan 30 22:57:55 crc kubenswrapper[4979]: I0130 22:57:55.203987 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995"} err="failed to get container status \"992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995\": rpc error: code = NotFound desc = could not find container \"992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995\": container with ID starting with 992f150fc71cf0bddd61f311462cfb87770a7d31f27e55b4da90da7875a8d995 not found: ID does not exist" Jan 30 22:57:57 crc kubenswrapper[4979]: I0130 22:57:57.084894 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" path="/var/lib/kubelet/pods/97674aa1-34d3-4bb3-a4f5-31af8b1138c4/volumes" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.488018 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tsvx6"] Jan 30 22:58:41 crc kubenswrapper[4979]: E0130 22:58:41.492519 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="registry-server" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.492738 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="registry-server" Jan 30 22:58:41 crc kubenswrapper[4979]: E0130 22:58:41.492844 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="extract-content" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.492980 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="extract-content" Jan 30 22:58:41 crc kubenswrapper[4979]: E0130 22:58:41.493121 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="extract-content" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.493210 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="extract-content" Jan 30 22:58:41 crc kubenswrapper[4979]: E0130 22:58:41.493300 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="registry-server" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.493387 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="registry-server" Jan 30 22:58:41 crc kubenswrapper[4979]: E0130 22:58:41.493484 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="extract-utilities" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.493794 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="extract-utilities" Jan 30 
22:58:41 crc kubenswrapper[4979]: E0130 22:58:41.493901 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="extract-utilities" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.493993 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="extract-utilities" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.494315 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="97674aa1-34d3-4bb3-a4f5-31af8b1138c4" containerName="registry-server" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.494453 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="623675c5-9919-4674-b268-95d143a04fee" containerName="registry-server" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.495707 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.503718 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsvx6"] Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.525151 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-catalog-content\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.525325 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-utilities\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.525356 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfhp9\" (UniqueName: \"kubernetes.io/projected/c6e711e0-7edf-438f-b03e-5e8f786c3737-kube-api-access-tfhp9\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.627886 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-utilities\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.627983 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfhp9\" (UniqueName: \"kubernetes.io/projected/c6e711e0-7edf-438f-b03e-5e8f786c3737-kube-api-access-tfhp9\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.628128 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-catalog-content\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " 
pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.628431 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-utilities\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.629021 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-catalog-content\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.647209 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfhp9\" (UniqueName: \"kubernetes.io/projected/c6e711e0-7edf-438f-b03e-5e8f786c3737-kube-api-access-tfhp9\") pod \"redhat-operators-tsvx6\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:41 crc kubenswrapper[4979]: I0130 22:58:41.815422 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:42 crc kubenswrapper[4979]: I0130 22:58:42.265705 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsvx6"] Jan 30 22:58:42 crc kubenswrapper[4979]: I0130 22:58:42.504096 4979 generic.go:334] "Generic (PLEG): container finished" podID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerID="5d7c479a9e141b7e7a00eb0439d0f66d01bd5fba7f1b04c726e4be2b19adc583" exitCode=0 Jan 30 22:58:42 crc kubenswrapper[4979]: I0130 22:58:42.504175 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsvx6" event={"ID":"c6e711e0-7edf-438f-b03e-5e8f786c3737","Type":"ContainerDied","Data":"5d7c479a9e141b7e7a00eb0439d0f66d01bd5fba7f1b04c726e4be2b19adc583"} Jan 30 22:58:42 crc kubenswrapper[4979]: I0130 22:58:42.504221 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsvx6" event={"ID":"c6e711e0-7edf-438f-b03e-5e8f786c3737","Type":"ContainerStarted","Data":"5b94e80ea9248f79b7959c6c9c8e88281a22d40693a65524aab21567090ee50c"} Jan 30 22:58:44 crc kubenswrapper[4979]: I0130 22:58:44.519362 4979 generic.go:334] "Generic (PLEG): container finished" podID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerID="c083d9958969dba5413db9bda4338a29832e0b8f64a3b09ee91958c62054a311" exitCode=0 Jan 30 22:58:44 crc kubenswrapper[4979]: I0130 22:58:44.519597 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsvx6" event={"ID":"c6e711e0-7edf-438f-b03e-5e8f786c3737","Type":"ContainerDied","Data":"c083d9958969dba5413db9bda4338a29832e0b8f64a3b09ee91958c62054a311"} Jan 30 22:58:45 crc kubenswrapper[4979]: I0130 22:58:45.530219 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsvx6" event={"ID":"c6e711e0-7edf-438f-b03e-5e8f786c3737","Type":"ContainerStarted","Data":"9c88c69e7e24787983fe9f8f6bdb91d7255bf3bd801a31ead9096f7b1cf60a35"} Jan 30 22:58:45 crc kubenswrapper[4979]: I0130 22:58:45.549647 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tsvx6" 
podStartSLOduration=2.136289782 podStartE2EDuration="4.549629719s" podCreationTimestamp="2026-01-30 22:58:41 +0000 UTC" firstStartedPulling="2026-01-30 22:58:42.50590784 +0000 UTC m=+4718.467154873" lastFinishedPulling="2026-01-30 22:58:44.919247777 +0000 UTC m=+4720.880494810" observedRunningTime="2026-01-30 22:58:45.547764538 +0000 UTC m=+4721.509011571" watchObservedRunningTime="2026-01-30 22:58:45.549629719 +0000 UTC m=+4721.510876752" Jan 30 22:58:51 crc kubenswrapper[4979]: I0130 22:58:51.816467 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:51 crc kubenswrapper[4979]: I0130 22:58:51.817517 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:51 crc kubenswrapper[4979]: I0130 22:58:51.895898 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:52 crc kubenswrapper[4979]: I0130 22:58:52.618831 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:52 crc kubenswrapper[4979]: I0130 22:58:52.672715 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsvx6"] Jan 30 22:58:54 crc kubenswrapper[4979]: I0130 22:58:54.927633 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tsvx6" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="registry-server" containerID="cri-o://9c88c69e7e24787983fe9f8f6bdb91d7255bf3bd801a31ead9096f7b1cf60a35" gracePeriod=2 Jan 30 22:58:55 crc kubenswrapper[4979]: I0130 22:58:55.935840 4979 generic.go:334] "Generic (PLEG): container finished" podID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerID="9c88c69e7e24787983fe9f8f6bdb91d7255bf3bd801a31ead9096f7b1cf60a35" exitCode=0 Jan 30 22:58:55 crc kubenswrapper[4979]: I0130 22:58:55.936072 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsvx6" event={"ID":"c6e711e0-7edf-438f-b03e-5e8f786c3737","Type":"ContainerDied","Data":"9c88c69e7e24787983fe9f8f6bdb91d7255bf3bd801a31ead9096f7b1cf60a35"} Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.394804 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.543064 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-utilities\") pod \"c6e711e0-7edf-438f-b03e-5e8f786c3737\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.543153 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfhp9\" (UniqueName: \"kubernetes.io/projected/c6e711e0-7edf-438f-b03e-5e8f786c3737-kube-api-access-tfhp9\") pod \"c6e711e0-7edf-438f-b03e-5e8f786c3737\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.543247 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-catalog-content\") pod \"c6e711e0-7edf-438f-b03e-5e8f786c3737\" (UID: \"c6e711e0-7edf-438f-b03e-5e8f786c3737\") " Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.544172 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-utilities" (OuterVolumeSpecName: "utilities") pod "c6e711e0-7edf-438f-b03e-5e8f786c3737" (UID: "c6e711e0-7edf-438f-b03e-5e8f786c3737"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.548738 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e711e0-7edf-438f-b03e-5e8f786c3737-kube-api-access-tfhp9" (OuterVolumeSpecName: "kube-api-access-tfhp9") pod "c6e711e0-7edf-438f-b03e-5e8f786c3737" (UID: "c6e711e0-7edf-438f-b03e-5e8f786c3737"). InnerVolumeSpecName "kube-api-access-tfhp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.644894 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.644995 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfhp9\" (UniqueName: \"kubernetes.io/projected/c6e711e0-7edf-438f-b03e-5e8f786c3737-kube-api-access-tfhp9\") on node \"crc\" DevicePath \"\"" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.703452 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6e711e0-7edf-438f-b03e-5e8f786c3737" (UID: "c6e711e0-7edf-438f-b03e-5e8f786c3737"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.746721 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e711e0-7edf-438f-b03e-5e8f786c3737-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.946940 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsvx6" event={"ID":"c6e711e0-7edf-438f-b03e-5e8f786c3737","Type":"ContainerDied","Data":"5b94e80ea9248f79b7959c6c9c8e88281a22d40693a65524aab21567090ee50c"} Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.946992 4979 scope.go:117] "RemoveContainer" containerID="9c88c69e7e24787983fe9f8f6bdb91d7255bf3bd801a31ead9096f7b1cf60a35" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.947208 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsvx6" Jan 30 22:58:56 crc kubenswrapper[4979]: I0130 22:58:56.973332 4979 scope.go:117] "RemoveContainer" containerID="c083d9958969dba5413db9bda4338a29832e0b8f64a3b09ee91958c62054a311" Jan 30 22:58:57 crc kubenswrapper[4979]: I0130 22:58:57.006976 4979 scope.go:117] "RemoveContainer" containerID="5d7c479a9e141b7e7a00eb0439d0f66d01bd5fba7f1b04c726e4be2b19adc583" Jan 30 22:58:57 crc kubenswrapper[4979]: I0130 22:58:57.024043 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsvx6"] Jan 30 22:58:57 crc kubenswrapper[4979]: I0130 22:58:57.033468 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tsvx6"] Jan 30 22:58:57 crc kubenswrapper[4979]: I0130 22:58:57.086156 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" path="/var/lib/kubelet/pods/c6e711e0-7edf-438f-b03e-5e8f786c3737/volumes" Jan 30 22:59:32 crc kubenswrapper[4979]: I0130 22:59:32.039977 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:59:32 crc kubenswrapper[4979]: I0130 22:59:32.040619 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.700079 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-92r6t"] Jan 30 22:59:59 crc kubenswrapper[4979]: E0130 22:59:59.701173 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="extract-content" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.701191 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="extract-content" Jan 30 22:59:59 crc kubenswrapper[4979]: E0130 22:59:59.701228 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="registry-server" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.701235 4979 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="registry-server" Jan 30 22:59:59 crc kubenswrapper[4979]: E0130 22:59:59.701250 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="extract-utilities" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.701258 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="extract-utilities" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.701419 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e711e0-7edf-438f-b03e-5e8f786c3737" containerName="registry-server" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.702336 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.705590 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.709318 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.710242 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.710322 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.710827 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-spztd" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.723583 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-92r6t"] Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.812375 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.812434 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-config\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.812524 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcsch\" (UniqueName: \"kubernetes.io/projected/d04dc18f-4a9e-40c5-89af-d1a090d55f19-kube-api-access-tcsch\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.913142 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcsch\" (UniqueName: \"kubernetes.io/projected/d04dc18f-4a9e-40c5-89af-d1a090d55f19-kube-api-access-tcsch\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 
22:59:59.913230 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.913253 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-config\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.914170 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-config\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.914219 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.944730 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcsch\" (UniqueName: \"kubernetes.io/projected/d04dc18f-4a9e-40c5-89af-d1a090d55f19-kube-api-access-tcsch\") pod \"dnsmasq-dns-5d7b5456f5-92r6t\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.985008 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-nfzdr"] Jan 30 22:59:59 crc kubenswrapper[4979]: I0130 22:59:59.986193 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.019377 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.046100 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-nfzdr"] Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.118316 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.118673 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-config\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.118695 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwd9z\" (UniqueName: \"kubernetes.io/projected/1d741010-36ef-41d3-8613-ab2d49cacfb7-kube-api-access-zwd9z\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.142083 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z"] Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.142930 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.149672 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.149915 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.154265 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z"] Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.221433 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-config\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.221475 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwd9z\" (UniqueName: \"kubernetes.io/projected/1d741010-36ef-41d3-8613-ab2d49cacfb7-kube-api-access-zwd9z\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.221539 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d53401-2853-4ace-84c5-621db486afe4-config-volume\") pod \"collect-profiles-29496900-jwp5z\" (UID: 
\"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.221560 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtrtk\" (UniqueName: \"kubernetes.io/projected/b9d53401-2853-4ace-84c5-621db486afe4-kube-api-access-dtrtk\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.221593 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.221630 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d53401-2853-4ace-84c5-621db486afe4-secret-volume\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.222528 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-config\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.223466 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.243561 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwd9z\" (UniqueName: \"kubernetes.io/projected/1d741010-36ef-41d3-8613-ab2d49cacfb7-kube-api-access-zwd9z\") pod \"dnsmasq-dns-98ddfc8f-nfzdr\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.303073 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.323265 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d53401-2853-4ace-84c5-621db486afe4-config-volume\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.323310 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtrtk\" (UniqueName: \"kubernetes.io/projected/b9d53401-2853-4ace-84c5-621db486afe4-kube-api-access-dtrtk\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.323365 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d53401-2853-4ace-84c5-621db486afe4-secret-volume\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.324552 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d53401-2853-4ace-84c5-621db486afe4-config-volume\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.327225 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d53401-2853-4ace-84c5-621db486afe4-secret-volume\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.339576 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtrtk\" (UniqueName: \"kubernetes.io/projected/b9d53401-2853-4ace-84c5-621db486afe4-kube-api-access-dtrtk\") pod \"collect-profiles-29496900-jwp5z\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.413094 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-92r6t"] Jan 30 23:00:00 crc kubenswrapper[4979]: W0130 23:00:00.448908 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd04dc18f_4a9e_40c5_89af_d1a090d55f19.slice/crio-3e2cb32fb370bb97ddbebc750e2ab37ebd4efdbc941a49f692587625db435740 WatchSource:0}: Error finding container 3e2cb32fb370bb97ddbebc750e2ab37ebd4efdbc941a49f692587625db435740: Status 404 returned error can't find the container with id 3e2cb32fb370bb97ddbebc750e2ab37ebd4efdbc941a49f692587625db435740 Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.465307 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" 
event={"ID":"d04dc18f-4a9e-40c5-89af-d1a090d55f19","Type":"ContainerStarted","Data":"3e2cb32fb370bb97ddbebc750e2ab37ebd4efdbc941a49f692587625db435740"} Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.467530 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.768935 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-nfzdr"] Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.863760 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.866379 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.871944 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.872135 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.872247 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.872361 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-25ft5" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.873366 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.877492 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:00:00 crc kubenswrapper[4979]: I0130 23:00:00.955695 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z"] Jan 30 23:00:00 crc kubenswrapper[4979]: W0130 23:00:00.964096 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9d53401_2853_4ace_84c5_621db486afe4.slice/crio-8d891a149bc55bf66fa9ac0c063ff10bfafef7c9840d1f38af02be89ad05e8a3 WatchSource:0}: Error finding container 8d891a149bc55bf66fa9ac0c063ff10bfafef7c9840d1f38af02be89ad05e8a3: Status 404 returned error can't find the container with id 8d891a149bc55bf66fa9ac0c063ff10bfafef7c9840d1f38af02be89ad05e8a3 Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.038650 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.038683 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.038707 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.038747 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.038883 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.039013 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6bwc\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-kube-api-access-l6bwc\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.039084 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.039116 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.039181 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140525 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140598 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140620 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6bwc\" (UniqueName: 
\"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-kube-api-access-l6bwc\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140683 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140707 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140742 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140761 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.140785 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.141474 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.142300 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.142603 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 
23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.143003 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.147340 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.147461 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.148552 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.148621 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03d01eb65d8adc4d32a35137e4c958b2a45829d9b744b41c2b35ba94851c4723/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.152660 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.160899 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6bwc\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-kube-api-access-l6bwc\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.189069 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.205652 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.216721 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.236268 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.240374 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.240398 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.240652 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.240735 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.240850 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.241211 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vcf7n" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.343582 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.343839 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.343867 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdc62fc-7d4d-4f2a-9611-4011f302320a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.343885 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.343904 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.343934 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc 
kubenswrapper[4979]: I0130 23:00:01.343973 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.344013 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdc62fc-7d4d-4f2a-9611-4011f302320a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.344048 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6c68\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-kube-api-access-s6c68\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445749 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445805 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445829 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdc62fc-7d4d-4f2a-9611-4011f302320a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445849 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445869 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445887 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: 
I0130 23:00:01.445907 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445947 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdc62fc-7d4d-4f2a-9611-4011f302320a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.445966 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6c68\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-kube-api-access-s6c68\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.446512 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.446658 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.447883 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.448501 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.451162 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdc62fc-7d4d-4f2a-9611-4011f302320a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.451340 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdc62fc-7d4d-4f2a-9611-4011f302320a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.451771 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.451804 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6649be050b7f075ba9ae655c5497b53ee628ceded131093e643c8c774a634b05/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.451971 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.463779 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6c68\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-kube-api-access-s6c68\") pod \"rabbitmq-cell1-server-0\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.478814 4979 generic.go:334] "Generic (PLEG): container finished" podID="b9d53401-2853-4ace-84c5-621db486afe4" containerID="21e6285a2d48c55e292d8fabf4f8ed164cdad4a9a3d4934a322f2d44ce65e551" exitCode=0 Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.478910 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" event={"ID":"b9d53401-2853-4ace-84c5-621db486afe4","Type":"ContainerDied","Data":"21e6285a2d48c55e292d8fabf4f8ed164cdad4a9a3d4934a322f2d44ce65e551"} Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.478938 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" event={"ID":"b9d53401-2853-4ace-84c5-621db486afe4","Type":"ContainerStarted","Data":"8d891a149bc55bf66fa9ac0c063ff10bfafef7c9840d1f38af02be89ad05e8a3"} Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.480642 4979 generic.go:334] "Generic (PLEG): container finished" podID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerID="e92999cfafdeac8211d5158e0746bbde23f4c02a545a5abc5507e1fcf7782d7c" exitCode=0 Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.480751 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" event={"ID":"1d741010-36ef-41d3-8613-ab2d49cacfb7","Type":"ContainerDied","Data":"e92999cfafdeac8211d5158e0746bbde23f4c02a545a5abc5507e1fcf7782d7c"} Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.480800 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" event={"ID":"1d741010-36ef-41d3-8613-ab2d49cacfb7","Type":"ContainerStarted","Data":"994ef8ca363dd40c266610987d3ec533707b724f9ddcc04659cbb378e0bcd6ba"} Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.481140 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.483191 4979 generic.go:334] "Generic (PLEG): container finished" podID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerID="e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6" exitCode=0 Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.483217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" event={"ID":"d04dc18f-4a9e-40c5-89af-d1a090d55f19","Type":"ContainerDied","Data":"e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6"} Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.550794 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:00:01 crc kubenswrapper[4979]: I0130 23:00:01.564850 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.040145 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.040486 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.060980 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 23:00:02 crc kubenswrapper[4979]: W0130 23:00:02.066985 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fdc62fc_7d4d_4f2a_9611_4011f302320a.slice/crio-77bba48b078db4e2a5d4fae60fd1fb07df7ec13057417d3ebfc0bf61c7d0d3fc WatchSource:0}: Error finding container 77bba48b078db4e2a5d4fae60fd1fb07df7ec13057417d3ebfc0bf61c7d0d3fc: Status 404 returned error can't find the container with id 77bba48b078db4e2a5d4fae60fd1fb07df7ec13057417d3ebfc0bf61c7d0d3fc Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.385304 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.387530 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.390409 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-q74gm" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.391348 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.391495 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.391696 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.404681 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.404969 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.494346 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" event={"ID":"1d741010-36ef-41d3-8613-ab2d49cacfb7","Type":"ContainerStarted","Data":"41d3a38d06953cb053d19bb002e45d19aa343a9b5dcd77d6e2762bf010b0a059"} Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.494689 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.497266 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" event={"ID":"d04dc18f-4a9e-40c5-89af-d1a090d55f19","Type":"ContainerStarted","Data":"ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef"} Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.497527 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.498974 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c","Type":"ContainerStarted","Data":"a401bb2823a23f538ee4aeaa4f20fe114ad58881d1e7c333038a2c3b643757b1"} Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.500764 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2fdc62fc-7d4d-4f2a-9611-4011f302320a","Type":"ContainerStarted","Data":"77bba48b078db4e2a5d4fae60fd1fb07df7ec13057417d3ebfc0bf61c7d0d3fc"} Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.554112 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" podStartSLOduration=3.554084926 podStartE2EDuration="3.554084926s" podCreationTimestamp="2026-01-30 22:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:02.53936805 +0000 UTC m=+4798.500615083" watchObservedRunningTime="2026-01-30 23:00:02.554084926 +0000 UTC m=+4798.515331969" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.564941 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/20a89776-fed1-4db4-80e6-11cfdb8f810b-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.565543 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-kolla-config\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.566501 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-75641116-471e-41cf-8659-4927e6f9165e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75641116-471e-41cf-8659-4927e6f9165e\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.566856 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a89776-fed1-4db4-80e6-11cfdb8f810b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.567065 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.583367 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpkcj\" (UniqueName: \"kubernetes.io/projected/20a89776-fed1-4db4-80e6-11cfdb8f810b-kube-api-access-vpkcj\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.583521 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/20a89776-fed1-4db4-80e6-11cfdb8f810b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.583584 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-config-data-default\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.585978 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" podStartSLOduration=3.585953053 podStartE2EDuration="3.585953053s" podCreationTimestamp="2026-01-30 22:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:02.585572823 +0000 UTC m=+4798.546819866" watchObservedRunningTime="2026-01-30 23:00:02.585953053 +0000 UTC m=+4798.547200096" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.685797 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/20a89776-fed1-4db4-80e6-11cfdb8f810b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.685876 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-config-data-default\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.685920 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/20a89776-fed1-4db4-80e6-11cfdb8f810b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.685961 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-kolla-config\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.686002 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-75641116-471e-41cf-8659-4927e6f9165e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75641116-471e-41cf-8659-4927e6f9165e\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.686055 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a89776-fed1-4db4-80e6-11cfdb8f810b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.686089 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.686128 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpkcj\" (UniqueName: \"kubernetes.io/projected/20a89776-fed1-4db4-80e6-11cfdb8f810b-kube-api-access-vpkcj\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.686980 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-kolla-config\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.687666 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/20a89776-fed1-4db4-80e6-11cfdb8f810b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.688358 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-config-data-default\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.689233 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a89776-fed1-4db4-80e6-11cfdb8f810b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.707841 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/20a89776-fed1-4db4-80e6-11cfdb8f810b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.708996 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.709153 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-75641116-471e-41cf-8659-4927e6f9165e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75641116-471e-41cf-8659-4927e6f9165e\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/78080cf084e20dcd3b8a9006ba9106db7dae3f598d9b707e2876adfc2da03006/globalmount\"" pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.715015 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a89776-fed1-4db4-80e6-11cfdb8f810b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.717258 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpkcj\" (UniqueName: \"kubernetes.io/projected/20a89776-fed1-4db4-80e6-11cfdb8f810b-kube-api-access-vpkcj\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.822392 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.828949 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.835758 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tj5p2" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.835956 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 23:00:02 crc kubenswrapper[4979]: I0130 23:00:02.854317 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.002851 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfbtn\" (UniqueName: \"kubernetes.io/projected/4a63b89d-496c-4f6e-8ba3-a18de60230af-kube-api-access-hfbtn\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.002930 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4a63b89d-496c-4f6e-8ba3-a18de60230af-kolla-config\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.003000 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a63b89d-496c-4f6e-8ba3-a18de60230af-config-data\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.017798 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-75641116-471e-41cf-8659-4927e6f9165e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75641116-471e-41cf-8659-4927e6f9165e\") pod \"openstack-galera-0\" (UID: \"20a89776-fed1-4db4-80e6-11cfdb8f810b\") " pod="openstack/openstack-galera-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.049578 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.106361 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfbtn\" (UniqueName: \"kubernetes.io/projected/4a63b89d-496c-4f6e-8ba3-a18de60230af-kube-api-access-hfbtn\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.106441 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4a63b89d-496c-4f6e-8ba3-a18de60230af-kolla-config\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.106508 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a63b89d-496c-4f6e-8ba3-a18de60230af-config-data\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.108248 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a63b89d-496c-4f6e-8ba3-a18de60230af-config-data\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.108614 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4a63b89d-496c-4f6e-8ba3-a18de60230af-kolla-config\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.130102 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfbtn\" (UniqueName: \"kubernetes.io/projected/4a63b89d-496c-4f6e-8ba3-a18de60230af-kube-api-access-hfbtn\") pod \"memcached-0\" (UID: \"4a63b89d-496c-4f6e-8ba3-a18de60230af\") " pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.155224 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.208122 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtrtk\" (UniqueName: \"kubernetes.io/projected/b9d53401-2853-4ace-84c5-621db486afe4-kube-api-access-dtrtk\") pod \"b9d53401-2853-4ace-84c5-621db486afe4\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.208391 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d53401-2853-4ace-84c5-621db486afe4-config-volume\") pod \"b9d53401-2853-4ace-84c5-621db486afe4\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.208433 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d53401-2853-4ace-84c5-621db486afe4-secret-volume\") pod \"b9d53401-2853-4ace-84c5-621db486afe4\" (UID: \"b9d53401-2853-4ace-84c5-621db486afe4\") " Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.210390 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9d53401-2853-4ace-84c5-621db486afe4-config-volume" (OuterVolumeSpecName: "config-volume") pod "b9d53401-2853-4ace-84c5-621db486afe4" (UID: "b9d53401-2853-4ace-84c5-621db486afe4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.215184 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d53401-2853-4ace-84c5-621db486afe4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b9d53401-2853-4ace-84c5-621db486afe4" (UID: "b9d53401-2853-4ace-84c5-621db486afe4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.215620 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9d53401-2853-4ace-84c5-621db486afe4-kube-api-access-dtrtk" (OuterVolumeSpecName: "kube-api-access-dtrtk") pod "b9d53401-2853-4ace-84c5-621db486afe4" (UID: "b9d53401-2853-4ace-84c5-621db486afe4"). InnerVolumeSpecName "kube-api-access-dtrtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.307721 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.310767 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d53401-2853-4ace-84c5-621db486afe4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.310787 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9d53401-2853-4ace-84c5-621db486afe4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.310797 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtrtk\" (UniqueName: \"kubernetes.io/projected/b9d53401-2853-4ace-84c5-621db486afe4-kube-api-access-dtrtk\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.395938 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.514808 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4a63b89d-496c-4f6e-8ba3-a18de60230af","Type":"ContainerStarted","Data":"de144da18fc6ad1e6da77fb3483d80cc8a6e01414222c71e9a594243e56e42c3"} Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.516937 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.517229 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-jwp5z" event={"ID":"b9d53401-2853-4ace-84c5-621db486afe4","Type":"ContainerDied","Data":"8d891a149bc55bf66fa9ac0c063ff10bfafef7c9840d1f38af02be89ad05e8a3"} Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.517277 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d891a149bc55bf66fa9ac0c063ff10bfafef7c9840d1f38af02be89ad05e8a3" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.518482 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c","Type":"ContainerStarted","Data":"bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69"} Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.520100 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2fdc62fc-7d4d-4f2a-9611-4011f302320a","Type":"ContainerStarted","Data":"fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5"} Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.786400 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 23:00:03 crc kubenswrapper[4979]: W0130 23:00:03.791664 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20a89776_fed1_4db4_80e6_11cfdb8f810b.slice/crio-15fcce142e41d2ca9e9f827fe7c4702d0330a21de52a09523184d5536a04dff8 WatchSource:0}: Error finding container 15fcce142e41d2ca9e9f827fe7c4702d0330a21de52a09523184d5536a04dff8: Status 404 returned error can't find the container with id 15fcce142e41d2ca9e9f827fe7c4702d0330a21de52a09523184d5536a04dff8 Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.959819 4979 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/openstack-cell1-galera-0"] Jan 30 23:00:03 crc kubenswrapper[4979]: E0130 23:00:03.960135 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d53401-2853-4ace-84c5-621db486afe4" containerName="collect-profiles" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.960153 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d53401-2853-4ace-84c5-621db486afe4" containerName="collect-profiles" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.960310 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9d53401-2853-4ace-84c5-621db486afe4" containerName="collect-profiles" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.961046 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.964266 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.964583 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.964797 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-8dl6r" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.964920 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 23:00:03 crc kubenswrapper[4979]: I0130 23:00:03.977203 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.113203 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x"] Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.117744 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-kz88x"] Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120229 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp6s4\" (UniqueName: \"kubernetes.io/projected/7dad08bf-c93b-417a-aeef-633e774fffcc-kube-api-access-sp6s4\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120266 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120295 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dad08bf-c93b-417a-aeef-633e774fffcc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120326 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7dad08bf-c93b-417a-aeef-633e774fffcc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120349 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120401 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dad08bf-c93b-417a-aeef-633e774fffcc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120436 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99782168-e91f-40ac-9aee-efb58898ed33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99782168-e91f-40ac-9aee-efb58898ed33\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.120454 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222294 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dad08bf-c93b-417a-aeef-633e774fffcc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222356 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222460 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dad08bf-c93b-417a-aeef-633e774fffcc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222494 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-99782168-e91f-40ac-9aee-efb58898ed33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99782168-e91f-40ac-9aee-efb58898ed33\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222515 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222556 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp6s4\" (UniqueName: \"kubernetes.io/projected/7dad08bf-c93b-417a-aeef-633e774fffcc-kube-api-access-sp6s4\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222577 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.222617 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dad08bf-c93b-417a-aeef-633e774fffcc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.224216 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dad08bf-c93b-417a-aeef-633e774fffcc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.224449 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.224999 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.225415 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dad08bf-c93b-417a-aeef-633e774fffcc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.225795 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.225820 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99782168-e91f-40ac-9aee-efb58898ed33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99782168-e91f-40ac-9aee-efb58898ed33\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/765b5e341727e84e0cabb13f32abb0e1618fe43994bb507754efecb61045210c/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.230475 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dad08bf-c93b-417a-aeef-633e774fffcc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.232961 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dad08bf-c93b-417a-aeef-633e774fffcc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.243628 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp6s4\" (UniqueName: \"kubernetes.io/projected/7dad08bf-c93b-417a-aeef-633e774fffcc-kube-api-access-sp6s4\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.256542 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99782168-e91f-40ac-9aee-efb58898ed33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99782168-e91f-40ac-9aee-efb58898ed33\") pod \"openstack-cell1-galera-0\" (UID: \"7dad08bf-c93b-417a-aeef-633e774fffcc\") " pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.321302 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.530104 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4a63b89d-496c-4f6e-8ba3-a18de60230af","Type":"ContainerStarted","Data":"848bbed8593996688ca0963310f58dd25f1a5ce8fa6b6b4bfd577b5bbec4167d"} Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.530493 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.533590 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"20a89776-fed1-4db4-80e6-11cfdb8f810b","Type":"ContainerStarted","Data":"7c9333f0e52c67c6d628c18c4ab9c918def259767c8171fa934338c1261af124"} Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.533645 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"20a89776-fed1-4db4-80e6-11cfdb8f810b","Type":"ContainerStarted","Data":"15fcce142e41d2ca9e9f827fe7c4702d0330a21de52a09523184d5536a04dff8"} Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.562429 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.562408802 podStartE2EDuration="2.562408802s" podCreationTimestamp="2026-01-30 23:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:04.558475576 +0000 UTC m=+4800.519722619" watchObservedRunningTime="2026-01-30 23:00:04.562408802 +0000 UTC m=+4800.523655835" Jan 30 23:00:04 crc kubenswrapper[4979]: I0130 23:00:04.745182 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 23:00:04 crc kubenswrapper[4979]: W0130 23:00:04.749950 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dad08bf_c93b_417a_aeef_633e774fffcc.slice/crio-8165e94581e4e34763f2cfba54f084084255be4fc625031281a8e5c0a1b96569 WatchSource:0}: Error finding container 8165e94581e4e34763f2cfba54f084084255be4fc625031281a8e5c0a1b96569: Status 404 returned error can't find the container with id 8165e94581e4e34763f2cfba54f084084255be4fc625031281a8e5c0a1b96569 Jan 30 23:00:05 crc kubenswrapper[4979]: I0130 23:00:05.084208 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8" path="/var/lib/kubelet/pods/4d3ca92a-4066-4d9d-bd64-f0cd2a6583f8/volumes" Jan 30 23:00:05 crc kubenswrapper[4979]: I0130 23:00:05.542272 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7dad08bf-c93b-417a-aeef-633e774fffcc","Type":"ContainerStarted","Data":"f1b4f2b54268550994dacc30fe9d7d0a5581aad8d3a5f1469657727a62239b36"} Jan 30 23:00:05 crc kubenswrapper[4979]: I0130 23:00:05.542335 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7dad08bf-c93b-417a-aeef-633e774fffcc","Type":"ContainerStarted","Data":"8165e94581e4e34763f2cfba54f084084255be4fc625031281a8e5c0a1b96569"} Jan 30 23:00:08 crc kubenswrapper[4979]: I0130 23:00:08.156878 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 23:00:08 crc kubenswrapper[4979]: I0130 23:00:08.563367 4979 generic.go:334] "Generic (PLEG): container finished" 
podID="7dad08bf-c93b-417a-aeef-633e774fffcc" containerID="f1b4f2b54268550994dacc30fe9d7d0a5581aad8d3a5f1469657727a62239b36" exitCode=0 Jan 30 23:00:08 crc kubenswrapper[4979]: I0130 23:00:08.563489 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7dad08bf-c93b-417a-aeef-633e774fffcc","Type":"ContainerDied","Data":"f1b4f2b54268550994dacc30fe9d7d0a5581aad8d3a5f1469657727a62239b36"} Jan 30 23:00:08 crc kubenswrapper[4979]: I0130 23:00:08.565139 4979 generic.go:334] "Generic (PLEG): container finished" podID="20a89776-fed1-4db4-80e6-11cfdb8f810b" containerID="7c9333f0e52c67c6d628c18c4ab9c918def259767c8171fa934338c1261af124" exitCode=0 Jan 30 23:00:08 crc kubenswrapper[4979]: I0130 23:00:08.565195 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"20a89776-fed1-4db4-80e6-11cfdb8f810b","Type":"ContainerDied","Data":"7c9333f0e52c67c6d628c18c4ab9c918def259767c8171fa934338c1261af124"} Jan 30 23:00:09 crc kubenswrapper[4979]: I0130 23:00:09.576627 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7dad08bf-c93b-417a-aeef-633e774fffcc","Type":"ContainerStarted","Data":"6e0298f58e58526dc18bb81924423207d305cab9bac291d9db544286fae44088"} Jan 30 23:00:09 crc kubenswrapper[4979]: I0130 23:00:09.580976 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"20a89776-fed1-4db4-80e6-11cfdb8f810b","Type":"ContainerStarted","Data":"dd49143ef0315529344db293049bb4fd2535a07a43cd4e524a6e26284e67964c"} Jan 30 23:00:09 crc kubenswrapper[4979]: I0130 23:00:09.632918 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.632899443 podStartE2EDuration="7.632899443s" podCreationTimestamp="2026-01-30 23:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:09.604692814 +0000 UTC m=+4805.565939947" watchObservedRunningTime="2026-01-30 23:00:09.632899443 +0000 UTC m=+4805.594146476" Jan 30 23:00:09 crc kubenswrapper[4979]: I0130 23:00:09.634009 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.634004333 podStartE2EDuration="8.634004333s" podCreationTimestamp="2026-01-30 23:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:09.629392298 +0000 UTC m=+4805.590639331" watchObservedRunningTime="2026-01-30 23:00:09.634004333 +0000 UTC m=+4805.595251366" Jan 30 23:00:10 crc kubenswrapper[4979]: I0130 23:00:10.021395 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 23:00:10 crc kubenswrapper[4979]: I0130 23:00:10.305261 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:00:10 crc kubenswrapper[4979]: I0130 23:00:10.375426 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-92r6t"] Jan 30 23:00:10 crc kubenswrapper[4979]: I0130 23:00:10.587769 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerName="dnsmasq-dns" 
containerID="cri-o://ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef" gracePeriod=10 Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.068624 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.171497 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcsch\" (UniqueName: \"kubernetes.io/projected/d04dc18f-4a9e-40c5-89af-d1a090d55f19-kube-api-access-tcsch\") pod \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.171566 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-dns-svc\") pod \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.171732 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-config\") pod \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\" (UID: \"d04dc18f-4a9e-40c5-89af-d1a090d55f19\") " Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.183962 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d04dc18f-4a9e-40c5-89af-d1a090d55f19-kube-api-access-tcsch" (OuterVolumeSpecName: "kube-api-access-tcsch") pod "d04dc18f-4a9e-40c5-89af-d1a090d55f19" (UID: "d04dc18f-4a9e-40c5-89af-d1a090d55f19"). InnerVolumeSpecName "kube-api-access-tcsch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.209952 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-config" (OuterVolumeSpecName: "config") pod "d04dc18f-4a9e-40c5-89af-d1a090d55f19" (UID: "d04dc18f-4a9e-40c5-89af-d1a090d55f19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.212533 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d04dc18f-4a9e-40c5-89af-d1a090d55f19" (UID: "d04dc18f-4a9e-40c5-89af-d1a090d55f19"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.273185 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcsch\" (UniqueName: \"kubernetes.io/projected/d04dc18f-4a9e-40c5-89af-d1a090d55f19-kube-api-access-tcsch\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.273236 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.273245 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04dc18f-4a9e-40c5-89af-d1a090d55f19-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.606875 4979 generic.go:334] "Generic (PLEG): container finished" podID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerID="ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef" exitCode=0 Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.606983 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" event={"ID":"d04dc18f-4a9e-40c5-89af-d1a090d55f19","Type":"ContainerDied","Data":"ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef"} Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.606999 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.608298 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-92r6t" event={"ID":"d04dc18f-4a9e-40c5-89af-d1a090d55f19","Type":"ContainerDied","Data":"3e2cb32fb370bb97ddbebc750e2ab37ebd4efdbc941a49f692587625db435740"} Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.608333 4979 scope.go:117] "RemoveContainer" containerID="ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.651941 4979 scope.go:117] "RemoveContainer" containerID="e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.652958 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-92r6t"] Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.657915 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-92r6t"] Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.672473 4979 scope.go:117] "RemoveContainer" containerID="ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef" Jan 30 23:00:11 crc kubenswrapper[4979]: E0130 23:00:11.673275 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef\": container with ID starting with ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef not found: ID does not exist" containerID="ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.673304 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef"} err="failed to get container status 
\"ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef\": rpc error: code = NotFound desc = could not find container \"ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef\": container with ID starting with ad55b1521b027917416c87f693bb6b3f041c08e0c5cf196bc67347a13d720fef not found: ID does not exist" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.673326 4979 scope.go:117] "RemoveContainer" containerID="e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6" Jan 30 23:00:11 crc kubenswrapper[4979]: E0130 23:00:11.673762 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6\": container with ID starting with e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6 not found: ID does not exist" containerID="e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6" Jan 30 23:00:11 crc kubenswrapper[4979]: I0130 23:00:11.673782 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6"} err="failed to get container status \"e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6\": rpc error: code = NotFound desc = could not find container \"e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6\": container with ID starting with e602b266aab05638f13c1684d3af33a290c658062aec1105059be51e10f91af6 not found: ID does not exist" Jan 30 23:00:12 crc kubenswrapper[4979]: E0130 23:00:12.203211 4979 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.143:57152->38.102.83.143:38353: read tcp 38.102.83.143:57152->38.102.83.143:38353: read: connection reset by peer Jan 30 23:00:13 crc kubenswrapper[4979]: I0130 23:00:13.087345 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" path="/var/lib/kubelet/pods/d04dc18f-4a9e-40c5-89af-d1a090d55f19/volumes" Jan 30 23:00:13 crc kubenswrapper[4979]: I0130 23:00:13.309172 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 23:00:13 crc kubenswrapper[4979]: I0130 23:00:13.309376 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 23:00:13 crc kubenswrapper[4979]: I0130 23:00:13.411799 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 23:00:13 crc kubenswrapper[4979]: I0130 23:00:13.717133 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 23:00:14 crc kubenswrapper[4979]: I0130 23:00:14.321847 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:14 crc kubenswrapper[4979]: I0130 23:00:14.321890 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:16 crc kubenswrapper[4979]: I0130 23:00:16.633216 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:16 crc kubenswrapper[4979]: I0130 23:00:16.703869 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.330627 4979 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pzcvl"] Jan 30 23:00:21 crc kubenswrapper[4979]: E0130 23:00:21.331556 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerName="init" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.331586 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerName="init" Jan 30 23:00:21 crc kubenswrapper[4979]: E0130 23:00:21.331621 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerName="dnsmasq-dns" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.331634 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerName="dnsmasq-dns" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.331892 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04dc18f-4a9e-40c5-89af-d1a090d55f19" containerName="dnsmasq-dns" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.332998 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.335882 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.346699 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pzcvl"] Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.438432 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-operator-scripts\") pod \"root-account-create-update-pzcvl\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.438499 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltdqv\" (UniqueName: \"kubernetes.io/projected/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-kube-api-access-ltdqv\") pod \"root-account-create-update-pzcvl\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.540880 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-operator-scripts\") pod \"root-account-create-update-pzcvl\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.540957 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltdqv\" (UniqueName: \"kubernetes.io/projected/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-kube-api-access-ltdqv\") pod \"root-account-create-update-pzcvl\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.541985 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-operator-scripts\") pod \"root-account-create-update-pzcvl\" (UID: 
\"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.559421 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltdqv\" (UniqueName: \"kubernetes.io/projected/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-kube-api-access-ltdqv\") pod \"root-account-create-update-pzcvl\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:21 crc kubenswrapper[4979]: I0130 23:00:21.649712 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:22 crc kubenswrapper[4979]: I0130 23:00:22.149188 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pzcvl"] Jan 30 23:00:22 crc kubenswrapper[4979]: W0130 23:00:22.153192 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e6c86a3_68af_49ab_9829_7bbe8fc0b0ba.slice/crio-6fff511cac73ae39353f034c1bae13e45f374e7947c58de2ba3bd411d977a27d WatchSource:0}: Error finding container 6fff511cac73ae39353f034c1bae13e45f374e7947c58de2ba3bd411d977a27d: Status 404 returned error can't find the container with id 6fff511cac73ae39353f034c1bae13e45f374e7947c58de2ba3bd411d977a27d Jan 30 23:00:22 crc kubenswrapper[4979]: I0130 23:00:22.700176 4979 generic.go:334] "Generic (PLEG): container finished" podID="4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" containerID="cf41beecee7e20e5f9c898a20d24e379b95780b90e8224537101891074538c0d" exitCode=0 Jan 30 23:00:22 crc kubenswrapper[4979]: I0130 23:00:22.700242 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pzcvl" event={"ID":"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba","Type":"ContainerDied","Data":"cf41beecee7e20e5f9c898a20d24e379b95780b90e8224537101891074538c0d"} Jan 30 23:00:22 crc kubenswrapper[4979]: I0130 23:00:22.700562 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pzcvl" event={"ID":"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba","Type":"ContainerStarted","Data":"6fff511cac73ae39353f034c1bae13e45f374e7947c58de2ba3bd411d977a27d"} Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.054471 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.191500 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltdqv\" (UniqueName: \"kubernetes.io/projected/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-kube-api-access-ltdqv\") pod \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.192429 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-operator-scripts\") pod \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\" (UID: \"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba\") " Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.193191 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" (UID: "4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.193639 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.198324 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-kube-api-access-ltdqv" (OuterVolumeSpecName: "kube-api-access-ltdqv") pod "4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" (UID: "4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba"). InnerVolumeSpecName "kube-api-access-ltdqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.295940 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltdqv\" (UniqueName: \"kubernetes.io/projected/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba-kube-api-access-ltdqv\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.718383 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pzcvl" event={"ID":"4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba","Type":"ContainerDied","Data":"6fff511cac73ae39353f034c1bae13e45f374e7947c58de2ba3bd411d977a27d"} Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.718440 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fff511cac73ae39353f034c1bae13e45f374e7947c58de2ba3bd411d977a27d" Jan 30 23:00:24 crc kubenswrapper[4979]: I0130 23:00:24.718499 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pzcvl" Jan 30 23:00:26 crc kubenswrapper[4979]: I0130 23:00:26.023223 4979 scope.go:117] "RemoveContainer" containerID="3bbe88baa1620c36ba12ba04d5a8542170b476b0b0988530b1848eeba6a89780" Jan 30 23:00:27 crc kubenswrapper[4979]: I0130 23:00:27.952290 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pzcvl"] Jan 30 23:00:27 crc kubenswrapper[4979]: I0130 23:00:27.961136 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pzcvl"] Jan 30 23:00:29 crc kubenswrapper[4979]: I0130 23:00:29.090923 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" path="/var/lib/kubelet/pods/4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba/volumes" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.671777 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b72zd"] Jan 30 23:00:30 crc kubenswrapper[4979]: E0130 23:00:30.673378 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" containerName="mariadb-account-create-update" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.673595 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" containerName="mariadb-account-create-update" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.673882 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6c86a3-68af-49ab-9829-7bbe8fc0b0ba" containerName="mariadb-account-create-update" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.694764 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.699672 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72zd"] Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.809010 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-catalog-content\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.809113 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-utilities\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.809227 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2k6\" (UniqueName: \"kubernetes.io/projected/76e9b909-c2fa-4a2c-b161-6ee2436ce983-kube-api-access-5z2k6\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.910587 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-catalog-content\") pod \"redhat-marketplace-b72zd\" (UID: 
\"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.910695 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-utilities\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.910784 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z2k6\" (UniqueName: \"kubernetes.io/projected/76e9b909-c2fa-4a2c-b161-6ee2436ce983-kube-api-access-5z2k6\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.911069 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-catalog-content\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.911212 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-utilities\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:30 crc kubenswrapper[4979]: I0130 23:00:30.943627 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z2k6\" (UniqueName: \"kubernetes.io/projected/76e9b909-c2fa-4a2c-b161-6ee2436ce983-kube-api-access-5z2k6\") pod \"redhat-marketplace-b72zd\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:31 crc kubenswrapper[4979]: I0130 23:00:31.039918 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:31 crc kubenswrapper[4979]: I0130 23:00:31.502699 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72zd"] Jan 30 23:00:31 crc kubenswrapper[4979]: I0130 23:00:31.776927 4979 generic.go:334] "Generic (PLEG): container finished" podID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerID="f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda" exitCode=0 Jan 30 23:00:31 crc kubenswrapper[4979]: I0130 23:00:31.777007 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerDied","Data":"f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda"} Jan 30 23:00:31 crc kubenswrapper[4979]: I0130 23:00:31.777473 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerStarted","Data":"3b4bbbd2f5f8d605423ee2168dd3e0a963546e4d187eae49e56f1c75e10714ce"} Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.039549 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.039648 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.039702 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.040439 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bba97c606dbe9c68f48bc5e0029f45fc1e7266ce68f26843db3d15f9ef6fef9"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.040497 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://3bba97c606dbe9c68f48bc5e0029f45fc1e7266ce68f26843db3d15f9ef6fef9" gracePeriod=600 Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.789504 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="3bba97c606dbe9c68f48bc5e0029f45fc1e7266ce68f26843db3d15f9ef6fef9" exitCode=0 Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.789559 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"3bba97c606dbe9c68f48bc5e0029f45fc1e7266ce68f26843db3d15f9ef6fef9"} Jan 30 23:00:32 crc 
kubenswrapper[4979]: I0130 23:00:32.792055 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"} Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.792134 4979 scope.go:117] "RemoveContainer" containerID="10b9850f1c05e889e12a65ad1ea699bd399a2f8a5b3841a455a0325df6396856" Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.795483 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerStarted","Data":"e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4"} Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.964957 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ntfjw"] Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.966129 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:32 crc kubenswrapper[4979]: I0130 23:00:32.970405 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.010397 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ntfjw"] Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.060067 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhst\" (UniqueName: \"kubernetes.io/projected/579619ae-df83-40ff-8580-331060c16faf-kube-api-access-2lhst\") pod \"root-account-create-update-ntfjw\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.060118 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/579619ae-df83-40ff-8580-331060c16faf-operator-scripts\") pod \"root-account-create-update-ntfjw\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.161772 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lhst\" (UniqueName: \"kubernetes.io/projected/579619ae-df83-40ff-8580-331060c16faf-kube-api-access-2lhst\") pod \"root-account-create-update-ntfjw\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.161825 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/579619ae-df83-40ff-8580-331060c16faf-operator-scripts\") pod \"root-account-create-update-ntfjw\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.162897 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/579619ae-df83-40ff-8580-331060c16faf-operator-scripts\") pod \"root-account-create-update-ntfjw\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " 
pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.182585 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lhst\" (UniqueName: \"kubernetes.io/projected/579619ae-df83-40ff-8580-331060c16faf-kube-api-access-2lhst\") pod \"root-account-create-update-ntfjw\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.317900 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.741922 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ntfjw"] Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.803487 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ntfjw" event={"ID":"579619ae-df83-40ff-8580-331060c16faf","Type":"ContainerStarted","Data":"e18d7827abd6930f15f3cea2c523dd9b6d4d86406411ab95da3f4fd12a3f6447"} Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.811651 4979 generic.go:334] "Generic (PLEG): container finished" podID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerID="e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4" exitCode=0 Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.811696 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerDied","Data":"e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4"} Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.811723 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerStarted","Data":"74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef"} Jan 30 23:00:33 crc kubenswrapper[4979]: I0130 23:00:33.836469 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b72zd" podStartSLOduration=2.121259997 podStartE2EDuration="3.83644787s" podCreationTimestamp="2026-01-30 23:00:30 +0000 UTC" firstStartedPulling="2026-01-30 23:00:31.7796045 +0000 UTC m=+4827.740851533" lastFinishedPulling="2026-01-30 23:00:33.494792383 +0000 UTC m=+4829.456039406" observedRunningTime="2026-01-30 23:00:33.835730471 +0000 UTC m=+4829.796977504" watchObservedRunningTime="2026-01-30 23:00:33.83644787 +0000 UTC m=+4829.797694903" Jan 30 23:00:34 crc kubenswrapper[4979]: I0130 23:00:34.820834 4979 generic.go:334] "Generic (PLEG): container finished" podID="579619ae-df83-40ff-8580-331060c16faf" containerID="2d0a143830dd73a91f1cb09ef9f3967be5ae0e4eb61c252cb0405d7e7fe00ec4" exitCode=0 Jan 30 23:00:34 crc kubenswrapper[4979]: I0130 23:00:34.820910 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ntfjw" event={"ID":"579619ae-df83-40ff-8580-331060c16faf","Type":"ContainerDied","Data":"2d0a143830dd73a91f1cb09ef9f3967be5ae0e4eb61c252cb0405d7e7fe00ec4"} Jan 30 23:00:34 crc kubenswrapper[4979]: I0130 23:00:34.822987 4979 generic.go:334] "Generic (PLEG): container finished" podID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerID="bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69" exitCode=0 Jan 30 23:00:34 crc kubenswrapper[4979]: I0130 23:00:34.823094 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c","Type":"ContainerDied","Data":"bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69"} Jan 30 23:00:35 crc kubenswrapper[4979]: I0130 23:00:35.832590 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c","Type":"ContainerStarted","Data":"80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398"} Jan 30 23:00:35 crc kubenswrapper[4979]: I0130 23:00:35.833527 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 23:00:35 crc kubenswrapper[4979]: I0130 23:00:35.834283 4979 generic.go:334] "Generic (PLEG): container finished" podID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerID="fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5" exitCode=0 Jan 30 23:00:35 crc kubenswrapper[4979]: I0130 23:00:35.834382 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2fdc62fc-7d4d-4f2a-9611-4011f302320a","Type":"ContainerDied","Data":"fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5"} Jan 30 23:00:35 crc kubenswrapper[4979]: I0130 23:00:35.858531 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.858496116 podStartE2EDuration="36.858496116s" podCreationTimestamp="2026-01-30 22:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:35.852927926 +0000 UTC m=+4831.814174959" watchObservedRunningTime="2026-01-30 23:00:35.858496116 +0000 UTC m=+4831.819743199" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.147379 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.247421 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lhst\" (UniqueName: \"kubernetes.io/projected/579619ae-df83-40ff-8580-331060c16faf-kube-api-access-2lhst\") pod \"579619ae-df83-40ff-8580-331060c16faf\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.247470 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/579619ae-df83-40ff-8580-331060c16faf-operator-scripts\") pod \"579619ae-df83-40ff-8580-331060c16faf\" (UID: \"579619ae-df83-40ff-8580-331060c16faf\") " Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.248164 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/579619ae-df83-40ff-8580-331060c16faf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "579619ae-df83-40ff-8580-331060c16faf" (UID: "579619ae-df83-40ff-8580-331060c16faf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.252465 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579619ae-df83-40ff-8580-331060c16faf-kube-api-access-2lhst" (OuterVolumeSpecName: "kube-api-access-2lhst") pod "579619ae-df83-40ff-8580-331060c16faf" (UID: "579619ae-df83-40ff-8580-331060c16faf"). InnerVolumeSpecName "kube-api-access-2lhst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.349093 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lhst\" (UniqueName: \"kubernetes.io/projected/579619ae-df83-40ff-8580-331060c16faf-kube-api-access-2lhst\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.349130 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/579619ae-df83-40ff-8580-331060c16faf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.845843 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2fdc62fc-7d4d-4f2a-9611-4011f302320a","Type":"ContainerStarted","Data":"2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea"} Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.846593 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.848335 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ntfjw" event={"ID":"579619ae-df83-40ff-8580-331060c16faf","Type":"ContainerDied","Data":"e18d7827abd6930f15f3cea2c523dd9b6d4d86406411ab95da3f4fd12a3f6447"} Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.848366 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18d7827abd6930f15f3cea2c523dd9b6d4d86406411ab95da3f4fd12a3f6447" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.848406 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ntfjw" Jan 30 23:00:36 crc kubenswrapper[4979]: I0130 23:00:36.878827 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.878803412 podStartE2EDuration="36.878803412s" podCreationTimestamp="2026-01-30 23:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:36.877439836 +0000 UTC m=+4832.838686869" watchObservedRunningTime="2026-01-30 23:00:36.878803412 +0000 UTC m=+4832.840050445" Jan 30 23:00:41 crc kubenswrapper[4979]: I0130 23:00:41.040680 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:41 crc kubenswrapper[4979]: I0130 23:00:41.042241 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:41 crc kubenswrapper[4979]: I0130 23:00:41.104514 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:41 crc kubenswrapper[4979]: I0130 23:00:41.967287 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:42 crc kubenswrapper[4979]: I0130 23:00:42.021896 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72zd"] Jan 30 23:00:43 crc kubenswrapper[4979]: I0130 23:00:43.918148 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b72zd" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="registry-server" containerID="cri-o://74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef" gracePeriod=2 Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.373171 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.477575 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-catalog-content\") pod \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.477645 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-utilities\") pod \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.477779 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z2k6\" (UniqueName: \"kubernetes.io/projected/76e9b909-c2fa-4a2c-b161-6ee2436ce983-kube-api-access-5z2k6\") pod \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\" (UID: \"76e9b909-c2fa-4a2c-b161-6ee2436ce983\") " Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.480225 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-utilities" (OuterVolumeSpecName: "utilities") pod "76e9b909-c2fa-4a2c-b161-6ee2436ce983" (UID: "76e9b909-c2fa-4a2c-b161-6ee2436ce983"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.483274 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e9b909-c2fa-4a2c-b161-6ee2436ce983-kube-api-access-5z2k6" (OuterVolumeSpecName: "kube-api-access-5z2k6") pod "76e9b909-c2fa-4a2c-b161-6ee2436ce983" (UID: "76e9b909-c2fa-4a2c-b161-6ee2436ce983"). InnerVolumeSpecName "kube-api-access-5z2k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.511237 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76e9b909-c2fa-4a2c-b161-6ee2436ce983" (UID: "76e9b909-c2fa-4a2c-b161-6ee2436ce983"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.579815 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z2k6\" (UniqueName: \"kubernetes.io/projected/76e9b909-c2fa-4a2c-b161-6ee2436ce983-kube-api-access-5z2k6\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.579854 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.579866 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e9b909-c2fa-4a2c-b161-6ee2436ce983-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.928679 4979 generic.go:334] "Generic (PLEG): container finished" podID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerID="74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef" exitCode=0 Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.928760 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerDied","Data":"74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef"} Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.928804 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72zd" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.928833 4979 scope.go:117] "RemoveContainer" containerID="74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.928811 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72zd" event={"ID":"76e9b909-c2fa-4a2c-b161-6ee2436ce983","Type":"ContainerDied","Data":"3b4bbbd2f5f8d605423ee2168dd3e0a963546e4d187eae49e56f1c75e10714ce"} Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.956568 4979 scope.go:117] "RemoveContainer" containerID="e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.986348 4979 scope.go:117] "RemoveContainer" containerID="f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda" Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.989066 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72zd"] Jan 30 23:00:44 crc kubenswrapper[4979]: I0130 23:00:44.998288 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72zd"] Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.026390 4979 scope.go:117] "RemoveContainer" containerID="74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef" Jan 30 23:00:45 crc kubenswrapper[4979]: E0130 23:00:45.027217 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef\": container with ID starting with 74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef not found: ID does not exist" containerID="74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef" Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.027293 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef"} err="failed to get container status \"74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef\": rpc error: code = NotFound desc = could not find container \"74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef\": container with ID starting with 74e41d437f2f5fe3cfacd88b8e13a672fd41c40794d0314b1a4d8cad2492b8ef not found: ID does not exist" Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.027337 4979 scope.go:117] "RemoveContainer" containerID="e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4" Jan 30 23:00:45 crc kubenswrapper[4979]: E0130 23:00:45.027793 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4\": container with ID starting with e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4 not found: ID does not exist" containerID="e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4" Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.027826 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4"} err="failed to get container status \"e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4\": rpc error: code = NotFound desc = could not find 
container \"e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4\": container with ID starting with e27942138a6e261ec5de200a3c00ccfe729055b258c9305205ef22c08c6277e4 not found: ID does not exist" Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.027847 4979 scope.go:117] "RemoveContainer" containerID="f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda" Jan 30 23:00:45 crc kubenswrapper[4979]: E0130 23:00:45.028131 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda\": container with ID starting with f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda not found: ID does not exist" containerID="f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda" Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.028149 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda"} err="failed to get container status \"f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda\": rpc error: code = NotFound desc = could not find container \"f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda\": container with ID starting with f37422fc2e99b3c3c2ff679e5cfa56a9e6a0c48e4208fbf760902fd2b4f4bcda not found: ID does not exist" Jan 30 23:00:45 crc kubenswrapper[4979]: I0130 23:00:45.086683 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" path="/var/lib/kubelet/pods/76e9b909-c2fa-4a2c-b161-6ee2436ce983/volumes" Jan 30 23:00:51 crc kubenswrapper[4979]: I0130 23:00:51.211433 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 23:00:51 crc kubenswrapper[4979]: I0130 23:00:51.569295 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.193159 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lpljg"] Jan 30 23:00:55 crc kubenswrapper[4979]: E0130 23:00:55.193866 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="extract-content" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.193880 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="extract-content" Jan 30 23:00:55 crc kubenswrapper[4979]: E0130 23:00:55.193901 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="extract-utilities" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.193906 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="extract-utilities" Jan 30 23:00:55 crc kubenswrapper[4979]: E0130 23:00:55.193925 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="registry-server" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.193932 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="registry-server" Jan 30 23:00:55 crc kubenswrapper[4979]: E0130 23:00:55.193942 4979 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="579619ae-df83-40ff-8580-331060c16faf" containerName="mariadb-account-create-update" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.193949 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="579619ae-df83-40ff-8580-331060c16faf" containerName="mariadb-account-create-update" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.194148 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e9b909-c2fa-4a2c-b161-6ee2436ce983" containerName="registry-server" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.194157 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="579619ae-df83-40ff-8580-331060c16faf" containerName="mariadb-account-create-update" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.195218 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.204426 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lpljg"] Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.377258 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.377771 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-config\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.377950 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l5q5\" (UniqueName: \"kubernetes.io/projected/2795bb3d-be81-4873-96f6-6f3a42857827-kube-api-access-9l5q5\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.480299 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-config\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.480447 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l5q5\" (UniqueName: \"kubernetes.io/projected/2795bb3d-be81-4873-96f6-6f3a42857827-kube-api-access-9l5q5\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.480591 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.481167 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-config\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.481314 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.506637 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l5q5\" (UniqueName: \"kubernetes.io/projected/2795bb3d-be81-4873-96f6-6f3a42857827-kube-api-access-9l5q5\") pod \"dnsmasq-dns-5b7946d7b9-lpljg\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.523389 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:55 crc kubenswrapper[4979]: I0130 23:00:55.563638 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:00:56 crc kubenswrapper[4979]: W0130 23:00:56.100196 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2795bb3d_be81_4873_96f6_6f3a42857827.slice/crio-89c2e0105cd91d45be0f9cf486bdd2b515115144c7c631fa1af7dbc2cbd8f36d WatchSource:0}: Error finding container 89c2e0105cd91d45be0f9cf486bdd2b515115144c7c631fa1af7dbc2cbd8f36d: Status 404 returned error can't find the container with id 89c2e0105cd91d45be0f9cf486bdd2b515115144c7c631fa1af7dbc2cbd8f36d Jan 30 23:00:56 crc kubenswrapper[4979]: I0130 23:00:56.103331 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lpljg"] Jan 30 23:00:56 crc kubenswrapper[4979]: I0130 23:00:56.210219 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" event={"ID":"2795bb3d-be81-4873-96f6-6f3a42857827","Type":"ContainerStarted","Data":"89c2e0105cd91d45be0f9cf486bdd2b515115144c7c631fa1af7dbc2cbd8f36d"} Jan 30 23:00:56 crc kubenswrapper[4979]: I0130 23:00:56.488499 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 23:00:57 crc kubenswrapper[4979]: I0130 23:00:57.219111 4979 generic.go:334] "Generic (PLEG): container finished" podID="2795bb3d-be81-4873-96f6-6f3a42857827" containerID="ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1" exitCode=0 Jan 30 23:00:57 crc kubenswrapper[4979]: I0130 23:00:57.219159 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" event={"ID":"2795bb3d-be81-4873-96f6-6f3a42857827","Type":"ContainerDied","Data":"ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1"} Jan 30 23:00:57 crc kubenswrapper[4979]: I0130 23:00:57.684319 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="rabbitmq" containerID="cri-o://80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398" gracePeriod=604798 Jan 30 23:00:58 crc kubenswrapper[4979]: I0130 23:00:58.228450 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" event={"ID":"2795bb3d-be81-4873-96f6-6f3a42857827","Type":"ContainerStarted","Data":"57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105"} Jan 30 23:00:58 crc kubenswrapper[4979]: I0130 23:00:58.228807 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:00:58 crc kubenswrapper[4979]: I0130 23:00:58.246836 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" podStartSLOduration=4.246817271 podStartE2EDuration="4.246817271s" podCreationTimestamp="2026-01-30 23:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:58.242109344 +0000 UTC m=+4854.203356377" watchObservedRunningTime="2026-01-30 23:00:58.246817271 +0000 UTC m=+4854.208064304" Jan 30 23:00:58 crc kubenswrapper[4979]: I0130 23:00:58.345386 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="rabbitmq" containerID="cri-o://2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea" gracePeriod=604799 Jan 30 23:01:01 crc kubenswrapper[4979]: I0130 23:01:01.207403 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.249:5672: connect: connection refused" Jan 30 23:01:01 crc kubenswrapper[4979]: I0130 23:01:01.567705 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.250:5672: connect: connection refused" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.250431 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258465 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-confd\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258692 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258758 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6bwc\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-kube-api-access-l6bwc\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258815 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-server-conf\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258866 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-pod-info\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258906 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-plugins-conf\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258933 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-plugins\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258959 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-erlang-cookie\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.258996 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-erlang-cookie-secret\") pod \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\" (UID: \"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c\") " Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.260154 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod 
"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.260206 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.260452 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.275291 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.280193 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-pod-info" (OuterVolumeSpecName: "pod-info") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.281376 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-kube-api-access-l6bwc" (OuterVolumeSpecName: "kube-api-access-l6bwc") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "kube-api-access-l6bwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.299577 4979 generic.go:334] "Generic (PLEG): container finished" podID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerID="80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398" exitCode=0 Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.299635 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c","Type":"ContainerDied","Data":"80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398"} Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.299675 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c","Type":"ContainerDied","Data":"a401bb2823a23f538ee4aeaa4f20fe114ad58881d1e7c333038a2c3b643757b1"} Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.299701 4979 scope.go:117] "RemoveContainer" containerID="80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.299915 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.308381 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-server-conf" (OuterVolumeSpecName: "server-conf") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362560 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6bwc\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-kube-api-access-l6bwc\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362594 4979 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362605 4979 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362615 4979 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362623 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362633 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.362643 4979 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.398393 4979 scope.go:117] "RemoveContainer" containerID="bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.411418 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb" (OuterVolumeSpecName: "persistence") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.414786 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" (UID: "4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.454192 4979 scope.go:117] "RemoveContainer" containerID="80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398" Jan 30 23:01:04 crc kubenswrapper[4979]: E0130 23:01:04.458127 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398\": container with ID starting with 80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398 not found: ID does not exist" containerID="80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.458168 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398"} err="failed to get container status \"80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398\": rpc error: code = NotFound desc = could not find container \"80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398\": container with ID starting with 80d7c2a329fe244ae445ecc7f2a0aeb64f7c042cee3b9cfe1f209385eee7f398 not found: ID does not exist" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.458193 4979 scope.go:117] "RemoveContainer" containerID="bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69" Jan 30 23:01:04 crc kubenswrapper[4979]: E0130 23:01:04.458474 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69\": container with ID starting with bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69 not found: ID does not exist" containerID="bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.458507 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69"} err="failed to get container status \"bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69\": rpc error: code = NotFound desc = could not find container 
\"bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69\": container with ID starting with bd7b64ddb520226a4edb191ca73f6db4852da2a7ca8f745a42e6e0ac7f74bb69 not found: ID does not exist" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.464211 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.464258 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") on node \"crc\" " Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.491827 4979 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.492350 4979 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb") on node "crc" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.565075 4979 reconciler_common.go:293] "Volume detached for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.634335 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.640151 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.670455 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:01:04 crc kubenswrapper[4979]: E0130 23:01:04.670925 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="setup-container" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.670949 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="setup-container" Jan 30 23:01:04 crc kubenswrapper[4979]: E0130 23:01:04.670969 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="rabbitmq" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.670977 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="rabbitmq" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.671424 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" containerName="rabbitmq" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.672502 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.675369 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.675770 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.680670 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-25ft5" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.680694 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.680762 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.690086 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.873738 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.873897 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874001 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c14c3367-d6a7-443a-9c15-913f73eac121-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874137 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c14c3367-d6a7-443a-9c15-913f73eac121-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874203 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c14c3367-d6a7-443a-9c15-913f73eac121-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874248 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c14c3367-d6a7-443a-9c15-913f73eac121-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874333 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874394 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmf4\" (UniqueName: \"kubernetes.io/projected/c14c3367-d6a7-443a-9c15-913f73eac121-kube-api-access-rcmf4\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.874443 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.971371 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.975868 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c14c3367-d6a7-443a-9c15-913f73eac121-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.975952 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.975982 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcmf4\" (UniqueName: \"kubernetes.io/projected/c14c3367-d6a7-443a-9c15-913f73eac121-kube-api-access-rcmf4\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976010 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976064 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976095 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc 
kubenswrapper[4979]: I0130 23:01:04.976119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c14c3367-d6a7-443a-9c15-913f73eac121-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976155 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c14c3367-d6a7-443a-9c15-913f73eac121-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976179 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c14c3367-d6a7-443a-9c15-913f73eac121-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976849 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.976867 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.977589 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c14c3367-d6a7-443a-9c15-913f73eac121-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.978160 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c14c3367-d6a7-443a-9c15-913f73eac121-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.981660 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c14c3367-d6a7-443a-9c15-913f73eac121-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0" Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.982263 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.982332 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03d01eb65d8adc4d32a35137e4c958b2a45829d9b744b41c2b35ba94851c4723/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.984796 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c14c3367-d6a7-443a-9c15-913f73eac121-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.986168 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c14c3367-d6a7-443a-9c15-913f73eac121-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:04 crc kubenswrapper[4979]: I0130 23:01:04.995513 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcmf4\" (UniqueName: \"kubernetes.io/projected/c14c3367-d6a7-443a-9c15-913f73eac121-kube-api-access-rcmf4\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.019252 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54ae290b-3a4b-47fc-ad07-d59c436ebabb\") pod \"rabbitmq-server-0\" (UID: \"c14c3367-d6a7-443a-9c15-913f73eac121\") " pod="openstack/rabbitmq-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.065966 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-25ft5"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.074756 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078109 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdc62fc-7d4d-4f2a-9611-4011f302320a-erlang-cookie-secret\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078226 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-plugins\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078277 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-server-conf\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078360 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-erlang-cookie\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078413 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6c68\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-kube-api-access-s6c68\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078454 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-plugins-conf\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078475 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-confd\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078778 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.078808 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdc62fc-7d4d-4f2a-9611-4011f302320a-pod-info\") pod \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\" (UID: \"2fdc62fc-7d4d-4f2a-9611-4011f302320a\") "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.079662 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.080075 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.080114 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.081508 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.081571 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.081589 4979 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.084356 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-kube-api-access-s6c68" (OuterVolumeSpecName: "kube-api-access-s6c68") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "kube-api-access-s6c68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.085306 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fdc62fc-7d4d-4f2a-9611-4011f302320a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.100967 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77" (OuterVolumeSpecName: "persistence") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.102904 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c" path="/var/lib/kubelet/pods/4ffcc69b-0a6a-4fb4-9deb-fc7d6de4033c/volumes"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.106062 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2fdc62fc-7d4d-4f2a-9611-4011f302320a-pod-info" (OuterVolumeSpecName: "pod-info") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.127812 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-server-conf" (OuterVolumeSpecName: "server-conf") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.178107 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2fdc62fc-7d4d-4f2a-9611-4011f302320a" (UID: "2fdc62fc-7d4d-4f2a-9611-4011f302320a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.182841 4979 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2fdc62fc-7d4d-4f2a-9611-4011f302320a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.182876 4979 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2fdc62fc-7d4d-4f2a-9611-4011f302320a-server-conf\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.182893 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6c68\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-kube-api-access-s6c68\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.182907 4979 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2fdc62fc-7d4d-4f2a-9611-4011f302320a-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.182937 4979 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") on node \"crc\" "
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.182950 4979 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2fdc62fc-7d4d-4f2a-9611-4011f302320a-pod-info\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.200187 4979 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.200495 4979 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77") on node "crc"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.284257 4979 reconciler_common.go:293] "Volume detached for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") on node \"crc\" DevicePath \"\""
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.320093 4979 generic.go:334] "Generic (PLEG): container finished" podID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerID="2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea" exitCode=0
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.320161 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2fdc62fc-7d4d-4f2a-9611-4011f302320a","Type":"ContainerDied","Data":"2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea"}
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.320191 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2fdc62fc-7d4d-4f2a-9611-4011f302320a","Type":"ContainerDied","Data":"77bba48b078db4e2a5d4fae60fd1fb07df7ec13057417d3ebfc0bf61c7d0d3fc"}
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.320215 4979 scope.go:117] "RemoveContainer" containerID="2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.320399 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.382806 4979 scope.go:117] "RemoveContainer" containerID="fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.395829 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.430178 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.434989 4979 scope.go:117] "RemoveContainer" containerID="2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea"
Jan 30 23:01:05 crc kubenswrapper[4979]: E0130 23:01:05.435440 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea\": container with ID starting with 2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea not found: ID does not exist" containerID="2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.436239 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea"} err="failed to get container status \"2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea\": rpc error: code = NotFound desc = could not find container \"2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea\": container with ID starting with 2b73e9c7dd78be2381faac570971d65e296abc68ef616ac3a06d60e6a93579ea not found: ID does not exist"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.436298 4979 scope.go:117] "RemoveContainer" containerID="fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.436595 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 23:01:05 crc kubenswrapper[4979]: E0130 23:01:05.437443 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="rabbitmq"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.437473 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="rabbitmq"
Jan 30 23:01:05 crc kubenswrapper[4979]: E0130 23:01:05.437504 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="setup-container"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.437515 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="setup-container"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.437970 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" containerName="rabbitmq"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.439185 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: E0130 23:01:05.441683 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5\": container with ID starting with fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5 not found: ID does not exist" containerID="fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.441760 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5"} err="failed to get container status \"fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5\": rpc error: code = NotFound desc = could not find container \"fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5\": container with ID starting with fece93ad4c9ad4c4d7afdc1c195c7e8955eaddded95f9d3a73fd1073b537c3f5 not found: ID does not exist"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.445420 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.445747 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.445901 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.446622 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.448290 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vcf7n"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.459209 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.472402 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.526819 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.596538 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-nfzdr"]
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.596960 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerName="dnsmasq-dns" containerID="cri-o://41d3a38d06953cb053d19bb002e45d19aa343a9b5dcd77d6e2762bf010b0a059" gracePeriod=10
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599317 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82kch\" (UniqueName: \"kubernetes.io/projected/291b372c-0448-4bc4-88a4-e61a412ba45a-kube-api-access-82kch\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599386 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/291b372c-0448-4bc4-88a4-e61a412ba45a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599444 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599470 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599502 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599540 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/291b372c-0448-4bc4-88a4-e61a412ba45a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599592 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599624 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/291b372c-0448-4bc4-88a4-e61a412ba45a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.599659 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/291b372c-0448-4bc4-88a4-e61a412ba45a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.700845 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/291b372c-0448-4bc4-88a4-e61a412ba45a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.700913 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82kch\" (UniqueName: \"kubernetes.io/projected/291b372c-0448-4bc4-88a4-e61a412ba45a-kube-api-access-82kch\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.700939 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/291b372c-0448-4bc4-88a4-e61a412ba45a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.700998 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.701023 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.701064 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.701092 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName:
\"kubernetes.io/secret/291b372c-0448-4bc4-88a4-e61a412ba45a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.701144 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.701166 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/291b372c-0448-4bc4-88a4-e61a412ba45a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.701989 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.702250 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.702638 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/291b372c-0448-4bc4-88a4-e61a412ba45a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.704178 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/291b372c-0448-4bc4-88a4-e61a412ba45a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.705749 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/291b372c-0448-4bc4-88a4-e61a412ba45a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.705864 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/291b372c-0448-4bc4-88a4-e61a412ba45a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.707380 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.707446 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6649be050b7f075ba9ae655c5497b53ee628ceded131093e643c8c774a634b05/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.708702 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/291b372c-0448-4bc4-88a4-e61a412ba45a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.724895 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82kch\" (UniqueName: \"kubernetes.io/projected/291b372c-0448-4bc4-88a4-e61a412ba45a-kube-api-access-82kch\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.747505 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7712178c-5e8e-4ca3-83b8-9a44ab58ff77\") pod \"rabbitmq-cell1-server-0\" (UID: \"291b372c-0448-4bc4-88a4-e61a412ba45a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:05 crc kubenswrapper[4979]: I0130 23:01:05.764976 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.292538 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 23:01:06 crc kubenswrapper[4979]: W0130 23:01:06.299695 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod291b372c_0448_4bc4_88a4_e61a412ba45a.slice/crio-323c500b9318f851f75cce06711cbcdf323f50d08cbaacb681ef4f666687c140 WatchSource:0}: Error finding container 323c500b9318f851f75cce06711cbcdf323f50d08cbaacb681ef4f666687c140: Status 404 returned error can't find the container with id 323c500b9318f851f75cce06711cbcdf323f50d08cbaacb681ef4f666687c140 Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.331894 4979 generic.go:334] "Generic (PLEG): container finished" podID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerID="41d3a38d06953cb053d19bb002e45d19aa343a9b5dcd77d6e2762bf010b0a059" exitCode=0 Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.332001 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" event={"ID":"1d741010-36ef-41d3-8613-ab2d49cacfb7","Type":"ContainerDied","Data":"41d3a38d06953cb053d19bb002e45d19aa343a9b5dcd77d6e2762bf010b0a059"} Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.332071 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" event={"ID":"1d741010-36ef-41d3-8613-ab2d49cacfb7","Type":"ContainerDied","Data":"994ef8ca363dd40c266610987d3ec533707b724f9ddcc04659cbb378e0bcd6ba"} Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.332085 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="994ef8ca363dd40c266610987d3ec533707b724f9ddcc04659cbb378e0bcd6ba" Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.334113 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c14c3367-d6a7-443a-9c15-913f73eac121","Type":"ContainerStarted","Data":"53d6d5c3023f791b5e35b41c3c8d865e43e3aa39c78f6651102d16bf2570191a"} Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.335536 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"291b372c-0448-4bc4-88a4-e61a412ba45a","Type":"ContainerStarted","Data":"323c500b9318f851f75cce06711cbcdf323f50d08cbaacb681ef4f666687c140"} Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.592495 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.718696 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-config\") pod \"1d741010-36ef-41d3-8613-ab2d49cacfb7\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.718763 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-dns-svc\") pod \"1d741010-36ef-41d3-8613-ab2d49cacfb7\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.718898 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwd9z\" (UniqueName: \"kubernetes.io/projected/1d741010-36ef-41d3-8613-ab2d49cacfb7-kube-api-access-zwd9z\") pod \"1d741010-36ef-41d3-8613-ab2d49cacfb7\" (UID: \"1d741010-36ef-41d3-8613-ab2d49cacfb7\") " Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.724741 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d741010-36ef-41d3-8613-ab2d49cacfb7-kube-api-access-zwd9z" (OuterVolumeSpecName: "kube-api-access-zwd9z") pod "1d741010-36ef-41d3-8613-ab2d49cacfb7" (UID: "1d741010-36ef-41d3-8613-ab2d49cacfb7"). InnerVolumeSpecName "kube-api-access-zwd9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.751705 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-config" (OuterVolumeSpecName: "config") pod "1d741010-36ef-41d3-8613-ab2d49cacfb7" (UID: "1d741010-36ef-41d3-8613-ab2d49cacfb7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.752599 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d741010-36ef-41d3-8613-ab2d49cacfb7" (UID: "1d741010-36ef-41d3-8613-ab2d49cacfb7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.820889 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.820934 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d741010-36ef-41d3-8613-ab2d49cacfb7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:06 crc kubenswrapper[4979]: I0130 23:01:06.820949 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwd9z\" (UniqueName: \"kubernetes.io/projected/1d741010-36ef-41d3-8613-ab2d49cacfb7-kube-api-access-zwd9z\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:07 crc kubenswrapper[4979]: I0130 23:01:07.082195 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fdc62fc-7d4d-4f2a-9611-4011f302320a" path="/var/lib/kubelet/pods/2fdc62fc-7d4d-4f2a-9611-4011f302320a/volumes" Jan 30 23:01:07 crc kubenswrapper[4979]: I0130 23:01:07.347069 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-nfzdr" Jan 30 23:01:07 crc kubenswrapper[4979]: I0130 23:01:07.347069 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c14c3367-d6a7-443a-9c15-913f73eac121","Type":"ContainerStarted","Data":"09631796762b28657e11525e96a349f5957cec89645ef9ef43ab94b3449842f1"} Jan 30 23:01:07 crc kubenswrapper[4979]: I0130 23:01:07.399613 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-nfzdr"] Jan 30 23:01:07 crc kubenswrapper[4979]: I0130 23:01:07.404738 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-nfzdr"] Jan 30 23:01:08 crc kubenswrapper[4979]: I0130 23:01:08.360454 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"291b372c-0448-4bc4-88a4-e61a412ba45a","Type":"ContainerStarted","Data":"87009f94c390821ac1ade83b4aa7515b4c96904491368c0879f4ae02975bac0c"} Jan 30 23:01:09 crc kubenswrapper[4979]: I0130 23:01:09.087022 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" path="/var/lib/kubelet/pods/1d741010-36ef-41d3-8613-ab2d49cacfb7/volumes" Jan 30 23:01:39 crc kubenswrapper[4979]: I0130 23:01:39.699423 4979 generic.go:334] "Generic (PLEG): container finished" podID="c14c3367-d6a7-443a-9c15-913f73eac121" containerID="09631796762b28657e11525e96a349f5957cec89645ef9ef43ab94b3449842f1" exitCode=0 Jan 30 23:01:39 crc kubenswrapper[4979]: I0130 23:01:39.699514 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c14c3367-d6a7-443a-9c15-913f73eac121","Type":"ContainerDied","Data":"09631796762b28657e11525e96a349f5957cec89645ef9ef43ab94b3449842f1"} Jan 30 23:01:40 crc kubenswrapper[4979]: I0130 23:01:40.719251 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c14c3367-d6a7-443a-9c15-913f73eac121","Type":"ContainerStarted","Data":"d2d486f9c0e9e83665afcf1e616fb2a34661752573d9902a71e346ff5b3430e3"} Jan 30 23:01:40 crc kubenswrapper[4979]: I0130 23:01:40.720093 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 23:01:40 crc kubenswrapper[4979]: I0130 
23:01:40.722962 4979 generic.go:334] "Generic (PLEG): container finished" podID="291b372c-0448-4bc4-88a4-e61a412ba45a" containerID="87009f94c390821ac1ade83b4aa7515b4c96904491368c0879f4ae02975bac0c" exitCode=0 Jan 30 23:01:40 crc kubenswrapper[4979]: I0130 23:01:40.723015 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"291b372c-0448-4bc4-88a4-e61a412ba45a","Type":"ContainerDied","Data":"87009f94c390821ac1ade83b4aa7515b4c96904491368c0879f4ae02975bac0c"} Jan 30 23:01:40 crc kubenswrapper[4979]: I0130 23:01:40.789513 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.789481194 podStartE2EDuration="36.789481194s" podCreationTimestamp="2026-01-30 23:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:01:40.77519527 +0000 UTC m=+4896.736442363" watchObservedRunningTime="2026-01-30 23:01:40.789481194 +0000 UTC m=+4896.750728237" Jan 30 23:01:41 crc kubenswrapper[4979]: I0130 23:01:41.735499 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"291b372c-0448-4bc4-88a4-e61a412ba45a","Type":"ContainerStarted","Data":"2a0b70b652ac3173ea2a8beea0a763672dabafb86dc545ddafb6bfd55b608b46"} Jan 30 23:01:41 crc kubenswrapper[4979]: I0130 23:01:41.736537 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:01:55 crc kubenswrapper[4979]: I0130 23:01:55.081248 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 23:01:55 crc kubenswrapper[4979]: I0130 23:01:55.124330 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.124309261 podStartE2EDuration="50.124309261s" podCreationTimestamp="2026-01-30 23:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:01:41.763411464 +0000 UTC m=+4897.724658517" watchObservedRunningTime="2026-01-30 23:01:55.124309261 +0000 UTC m=+4911.085556294" Jan 30 23:01:55 crc kubenswrapper[4979]: I0130 23:01:55.769901 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.179058 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 30 23:02:07 crc kubenswrapper[4979]: E0130 23:02:07.180184 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerName="dnsmasq-dns" Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.180200 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerName="dnsmasq-dns" Jan 30 23:02:07 crc kubenswrapper[4979]: E0130 23:02:07.180220 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerName="init" Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.180227 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" containerName="init" Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.180393 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d741010-36ef-41d3-8613-ab2d49cacfb7" 
containerName="dnsmasq-dns" Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.180995 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.183423 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fgnfz" Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.187866 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.365835 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-458cb\" (UniqueName: \"kubernetes.io/projected/fe0ac8f0-91f1-4df6-8085-199514aa8d14-kube-api-access-458cb\") pod \"mariadb-client\" (UID: \"fe0ac8f0-91f1-4df6-8085-199514aa8d14\") " pod="openstack/mariadb-client" Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.468846 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-458cb\" (UniqueName: \"kubernetes.io/projected/fe0ac8f0-91f1-4df6-8085-199514aa8d14-kube-api-access-458cb\") pod \"mariadb-client\" (UID: \"fe0ac8f0-91f1-4df6-8085-199514aa8d14\") " pod="openstack/mariadb-client" Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.516350 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-458cb\" (UniqueName: \"kubernetes.io/projected/fe0ac8f0-91f1-4df6-8085-199514aa8d14-kube-api-access-458cb\") pod \"mariadb-client\" (UID: \"fe0ac8f0-91f1-4df6-8085-199514aa8d14\") " pod="openstack/mariadb-client" Jan 30 23:02:07 crc kubenswrapper[4979]: I0130 23:02:07.807302 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 23:02:08 crc kubenswrapper[4979]: I0130 23:02:08.356374 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 23:02:08 crc kubenswrapper[4979]: I0130 23:02:08.984379 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fe0ac8f0-91f1-4df6-8085-199514aa8d14","Type":"ContainerStarted","Data":"45da7abfe3b12754b892c78fe72f8f43e1eb3218665066ab662322a1773d55e0"} Jan 30 23:02:08 crc kubenswrapper[4979]: I0130 23:02:08.985136 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fe0ac8f0-91f1-4df6-8085-199514aa8d14","Type":"ContainerStarted","Data":"fa39b2a93f043816f3a61766d193f90e8a471ec6816dd68b3f369617b01a06e6"} Jan 30 23:02:09 crc kubenswrapper[4979]: I0130 23:02:09.002328 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.002304224 podStartE2EDuration="2.002304224s" podCreationTimestamp="2026-01-30 23:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:02:08.997867375 +0000 UTC m=+4924.959114418" watchObservedRunningTime="2026-01-30 23:02:09.002304224 +0000 UTC m=+4924.963551257" Jan 30 23:02:13 crc kubenswrapper[4979]: E0130 23:02:13.552489 4979 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.143:42220->38.102.83.143:38353: write tcp 38.102.83.143:42220->38.102.83.143:38353: write: connection reset by peer Jan 30 23:02:22 crc kubenswrapper[4979]: I0130 23:02:22.969563 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] 
Jan 30 23:02:22 crc kubenswrapper[4979]: I0130 23:02:22.970476 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="fe0ac8f0-91f1-4df6-8085-199514aa8d14" containerName="mariadb-client" containerID="cri-o://45da7abfe3b12754b892c78fe72f8f43e1eb3218665066ab662322a1773d55e0" gracePeriod=30 Jan 30 23:02:23 crc kubenswrapper[4979]: I0130 23:02:23.102407 4979 generic.go:334] "Generic (PLEG): container finished" podID="fe0ac8f0-91f1-4df6-8085-199514aa8d14" containerID="45da7abfe3b12754b892c78fe72f8f43e1eb3218665066ab662322a1773d55e0" exitCode=143 Jan 30 23:02:23 crc kubenswrapper[4979]: I0130 23:02:23.102456 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fe0ac8f0-91f1-4df6-8085-199514aa8d14","Type":"ContainerDied","Data":"45da7abfe3b12754b892c78fe72f8f43e1eb3218665066ab662322a1773d55e0"} Jan 30 23:02:23 crc kubenswrapper[4979]: I0130 23:02:23.458402 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 23:02:23 crc kubenswrapper[4979]: I0130 23:02:23.469541 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-458cb\" (UniqueName: \"kubernetes.io/projected/fe0ac8f0-91f1-4df6-8085-199514aa8d14-kube-api-access-458cb\") pod \"fe0ac8f0-91f1-4df6-8085-199514aa8d14\" (UID: \"fe0ac8f0-91f1-4df6-8085-199514aa8d14\") " Jan 30 23:02:23 crc kubenswrapper[4979]: I0130 23:02:23.478260 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe0ac8f0-91f1-4df6-8085-199514aa8d14-kube-api-access-458cb" (OuterVolumeSpecName: "kube-api-access-458cb") pod "fe0ac8f0-91f1-4df6-8085-199514aa8d14" (UID: "fe0ac8f0-91f1-4df6-8085-199514aa8d14"). InnerVolumeSpecName "kube-api-access-458cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:02:23 crc kubenswrapper[4979]: I0130 23:02:23.571452 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-458cb\" (UniqueName: \"kubernetes.io/projected/fe0ac8f0-91f1-4df6-8085-199514aa8d14-kube-api-access-458cb\") on node \"crc\" DevicePath \"\"" Jan 30 23:02:24 crc kubenswrapper[4979]: I0130 23:02:24.120968 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fe0ac8f0-91f1-4df6-8085-199514aa8d14","Type":"ContainerDied","Data":"fa39b2a93f043816f3a61766d193f90e8a471ec6816dd68b3f369617b01a06e6"} Jan 30 23:02:24 crc kubenswrapper[4979]: I0130 23:02:24.121016 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 23:02:24 crc kubenswrapper[4979]: I0130 23:02:24.121312 4979 scope.go:117] "RemoveContainer" containerID="45da7abfe3b12754b892c78fe72f8f43e1eb3218665066ab662322a1773d55e0" Jan 30 23:02:24 crc kubenswrapper[4979]: I0130 23:02:24.151343 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 30 23:02:24 crc kubenswrapper[4979]: I0130 23:02:24.157939 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 30 23:02:25 crc kubenswrapper[4979]: I0130 23:02:25.083894 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe0ac8f0-91f1-4df6-8085-199514aa8d14" path="/var/lib/kubelet/pods/fe0ac8f0-91f1-4df6-8085-199514aa8d14/volumes" Jan 30 23:02:32 crc kubenswrapper[4979]: I0130 23:02:32.039200 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:02:32 crc kubenswrapper[4979]: I0130 23:02:32.040004 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:03:02 crc kubenswrapper[4979]: I0130 23:03:02.040354 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:03:02 crc kubenswrapper[4979]: I0130 23:03:02.041122 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:03:26 crc kubenswrapper[4979]: I0130 23:03:26.211691 4979 scope.go:117] "RemoveContainer" containerID="f9b321201755262611e536dca11c7193aa5f320fa99f7da74aac970a57d934ef" Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.039611 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.040395 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.040450 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.041176 4979 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.041236 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" gracePeriod=600 Jan 30 23:03:32 crc kubenswrapper[4979]: E0130 23:03:32.167156 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.729962 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" exitCode=0 Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.730008 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"} Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.730073 4979 scope.go:117] "RemoveContainer" containerID="3bba97c606dbe9c68f48bc5e0029f45fc1e7266ce68f26843db3d15f9ef6fef9" Jan 30 23:03:32 crc kubenswrapper[4979]: I0130 23:03:32.730631 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:03:32 crc kubenswrapper[4979]: E0130 23:03:32.730934 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:03:46 crc kubenswrapper[4979]: I0130 23:03:46.069898 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:03:46 crc kubenswrapper[4979]: E0130 23:03:46.070678 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:04:01 crc kubenswrapper[4979]: I0130 23:04:01.071454 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:04:01 crc kubenswrapper[4979]: E0130 23:04:01.072687 4979 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:04:14 crc kubenswrapper[4979]: I0130 23:04:14.069444 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:04:14 crc kubenswrapper[4979]: E0130 23:04:14.072662 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:04:25 crc kubenswrapper[4979]: I0130 23:04:25.073706 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:04:25 crc kubenswrapper[4979]: E0130 23:04:25.077114 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:04:37 crc kubenswrapper[4979]: I0130 23:04:37.069355 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:04:37 crc kubenswrapper[4979]: E0130 23:04:37.070169 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:04:50 crc kubenswrapper[4979]: I0130 23:04:50.069561 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:04:50 crc kubenswrapper[4979]: E0130 23:04:50.070373 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:05:01 crc kubenswrapper[4979]: I0130 23:05:01.069586 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:05:01 crc kubenswrapper[4979]: E0130 23:05:01.070323 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:05:14 crc kubenswrapper[4979]: I0130 23:05:14.070454 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:05:14 crc kubenswrapper[4979]: E0130 23:05:14.071159 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:05:29 crc kubenswrapper[4979]: I0130 23:05:29.069967 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:05:29 crc kubenswrapper[4979]: E0130 23:05:29.071435 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:05:41 crc kubenswrapper[4979]: I0130 23:05:41.069496 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:05:41 crc kubenswrapper[4979]: E0130 23:05:41.070342 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:05:55 crc kubenswrapper[4979]: I0130 23:05:55.073861 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:05:55 crc kubenswrapper[4979]: E0130 23:05:55.074950 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:06:06 crc kubenswrapper[4979]: I0130 23:06:06.069976 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:06:06 crc kubenswrapper[4979]: E0130 23:06:06.070889 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:06:18 crc kubenswrapper[4979]: I0130 23:06:18.071953 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:06:18 crc kubenswrapper[4979]: E0130 23:06:18.073092 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:06:26 crc kubenswrapper[4979]: I0130 23:06:26.311792 4979 scope.go:117] "RemoveContainer" containerID="e92999cfafdeac8211d5158e0746bbde23f4c02a545a5abc5507e1fcf7782d7c" Jan 30 23:06:26 crc kubenswrapper[4979]: I0130 23:06:26.345328 4979 scope.go:117] "RemoveContainer" containerID="41d3a38d06953cb053d19bb002e45d19aa343a9b5dcd77d6e2762bf010b0a059" Jan 30 23:06:26 crc kubenswrapper[4979]: I0130 23:06:26.391268 4979 scope.go:117] "RemoveContainer" containerID="cf41beecee7e20e5f9c898a20d24e379b95780b90e8224537101891074538c0d" Jan 30 23:06:32 crc kubenswrapper[4979]: I0130 23:06:32.070103 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:06:32 crc kubenswrapper[4979]: E0130 23:06:32.071118 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:06:47 crc kubenswrapper[4979]: I0130 23:06:47.070571 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:06:47 crc kubenswrapper[4979]: E0130 23:06:47.072793 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.049742 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 30 23:06:58 crc kubenswrapper[4979]: E0130 23:06:58.050698 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0ac8f0-91f1-4df6-8085-199514aa8d14" containerName="mariadb-client" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.050716 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0ac8f0-91f1-4df6-8085-199514aa8d14" containerName="mariadb-client" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.050877 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe0ac8f0-91f1-4df6-8085-199514aa8d14" containerName="mariadb-client" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.051448 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.053846 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fgnfz" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.059854 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.241718 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-474d0e72-0a0b-4960-b505-44d39376c537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-474d0e72-0a0b-4960-b505-44d39376c537\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") " pod="openstack/mariadb-copy-data" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.241865 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j2dz\" (UniqueName: \"kubernetes.io/projected/74f9350b-6f51-40b4-85a5-be1ffad9eb0c-kube-api-access-2j2dz\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") " pod="openstack/mariadb-copy-data" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.342695 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j2dz\" (UniqueName: \"kubernetes.io/projected/74f9350b-6f51-40b4-85a5-be1ffad9eb0c-kube-api-access-2j2dz\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") " pod="openstack/mariadb-copy-data" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.342788 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-474d0e72-0a0b-4960-b505-44d39376c537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-474d0e72-0a0b-4960-b505-44d39376c537\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") " pod="openstack/mariadb-copy-data" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.346235 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.346288 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-474d0e72-0a0b-4960-b505-44d39376c537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-474d0e72-0a0b-4960-b505-44d39376c537\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/632df6c27a56bcf278d2460de3056861e6548f56376a2497724cfc36261c4e22/globalmount\"" pod="openstack/mariadb-copy-data" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.363774 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j2dz\" (UniqueName: \"kubernetes.io/projected/74f9350b-6f51-40b4-85a5-be1ffad9eb0c-kube-api-access-2j2dz\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") " pod="openstack/mariadb-copy-data" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.392999 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-474d0e72-0a0b-4960-b505-44d39376c537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-474d0e72-0a0b-4960-b505-44d39376c537\") pod \"mariadb-copy-data\" (UID: \"74f9350b-6f51-40b4-85a5-be1ffad9eb0c\") " pod="openstack/mariadb-copy-data" Jan 30 23:06:58 crc kubenswrapper[4979]: I0130 23:06:58.677317 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 30 23:06:59 crc kubenswrapper[4979]: I0130 23:06:59.070536 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:06:59 crc kubenswrapper[4979]: E0130 23:06:59.071544 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:06:59 crc kubenswrapper[4979]: I0130 23:06:59.237596 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 30 23:07:00 crc kubenswrapper[4979]: I0130 23:07:00.150675 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"74f9350b-6f51-40b4-85a5-be1ffad9eb0c","Type":"ContainerStarted","Data":"edacbc61ffb4c9696cdc3f0b53d6ad7feb8b1e4ae8181ed8f18e11106e85d136"} Jan 30 23:07:00 crc kubenswrapper[4979]: I0130 23:07:00.150716 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"74f9350b-6f51-40b4-85a5-be1ffad9eb0c","Type":"ContainerStarted","Data":"3d00142cf672b613344d5691e7ff65b1551f927eb9a94aa238aa3d66c45fa533"} Jan 30 23:07:00 crc kubenswrapper[4979]: I0130 23:07:00.170413 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.170389668 podStartE2EDuration="3.170389668s" podCreationTimestamp="2026-01-30 23:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:00.165948188 +0000 UTC m=+5216.127195261" watchObservedRunningTime="2026-01-30 23:07:00.170389668 +0000 UTC m=+5216.131636711" Jan 30 23:07:02 crc 
kubenswrapper[4979]: I0130 23:07:02.958941 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:02 crc kubenswrapper[4979]: I0130 23:07:02.960265 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:02 crc kubenswrapper[4979]: I0130 23:07:02.969296 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:03 crc kubenswrapper[4979]: I0130 23:07:03.123178 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq7zp\" (UniqueName: \"kubernetes.io/projected/d5c3d722-f00d-4176-95e2-be3e349e9be4-kube-api-access-nq7zp\") pod \"mariadb-client\" (UID: \"d5c3d722-f00d-4176-95e2-be3e349e9be4\") " pod="openstack/mariadb-client"
Jan 30 23:07:03 crc kubenswrapper[4979]: I0130 23:07:03.225150 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq7zp\" (UniqueName: \"kubernetes.io/projected/d5c3d722-f00d-4176-95e2-be3e349e9be4-kube-api-access-nq7zp\") pod \"mariadb-client\" (UID: \"d5c3d722-f00d-4176-95e2-be3e349e9be4\") " pod="openstack/mariadb-client"
Jan 30 23:07:03 crc kubenswrapper[4979]: I0130 23:07:03.249433 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq7zp\" (UniqueName: \"kubernetes.io/projected/d5c3d722-f00d-4176-95e2-be3e349e9be4-kube-api-access-nq7zp\") pod \"mariadb-client\" (UID: \"d5c3d722-f00d-4176-95e2-be3e349e9be4\") " pod="openstack/mariadb-client"
Jan 30 23:07:03 crc kubenswrapper[4979]: I0130 23:07:03.285907 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:03 crc kubenswrapper[4979]: I0130 23:07:03.787379 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:04 crc kubenswrapper[4979]: I0130 23:07:04.182567 4979 generic.go:334] "Generic (PLEG): container finished" podID="d5c3d722-f00d-4176-95e2-be3e349e9be4" containerID="dad83fe6e0dd13f90e65510d87c2454c3b37aa1abc0bae6f460d76fcaed45b7c" exitCode=0
Jan 30 23:07:04 crc kubenswrapper[4979]: I0130 23:07:04.182699 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d5c3d722-f00d-4176-95e2-be3e349e9be4","Type":"ContainerDied","Data":"dad83fe6e0dd13f90e65510d87c2454c3b37aa1abc0bae6f460d76fcaed45b7c"}
Jan 30 23:07:04 crc kubenswrapper[4979]: I0130 23:07:04.182885 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d5c3d722-f00d-4176-95e2-be3e349e9be4","Type":"ContainerStarted","Data":"d6e44c0eb41f535c1f9148ea313afd36baeddf318b670edacb1d512e409f16fb"}
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.567004 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.608949 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_d5c3d722-f00d-4176-95e2-be3e349e9be4/mariadb-client/0.log"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.643752 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.651431 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.667874 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq7zp\" (UniqueName: \"kubernetes.io/projected/d5c3d722-f00d-4176-95e2-be3e349e9be4-kube-api-access-nq7zp\") pod \"d5c3d722-f00d-4176-95e2-be3e349e9be4\" (UID: \"d5c3d722-f00d-4176-95e2-be3e349e9be4\") "
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.675477 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c3d722-f00d-4176-95e2-be3e349e9be4-kube-api-access-nq7zp" (OuterVolumeSpecName: "kube-api-access-nq7zp") pod "d5c3d722-f00d-4176-95e2-be3e349e9be4" (UID: "d5c3d722-f00d-4176-95e2-be3e349e9be4"). InnerVolumeSpecName "kube-api-access-nq7zp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.767497 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:05 crc kubenswrapper[4979]: E0130 23:07:05.767993 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c3d722-f00d-4176-95e2-be3e349e9be4" containerName="mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.768017 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c3d722-f00d-4176-95e2-be3e349e9be4" containerName="mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.768262 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c3d722-f00d-4176-95e2-be3e349e9be4" containerName="mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.768907 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.771186 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq7zp\" (UniqueName: \"kubernetes.io/projected/d5c3d722-f00d-4176-95e2-be3e349e9be4-kube-api-access-nq7zp\") on node \"crc\" DevicePath \"\""
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.773395 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.873475 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vkk9\" (UniqueName: \"kubernetes.io/projected/eeb09949-6907-4277-8d3e-1b0090b437ab-kube-api-access-8vkk9\") pod \"mariadb-client\" (UID: \"eeb09949-6907-4277-8d3e-1b0090b437ab\") " pod="openstack/mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.975002 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vkk9\" (UniqueName: \"kubernetes.io/projected/eeb09949-6907-4277-8d3e-1b0090b437ab-kube-api-access-8vkk9\") pod \"mariadb-client\" (UID: \"eeb09949-6907-4277-8d3e-1b0090b437ab\") " pod="openstack/mariadb-client"
Jan 30 23:07:05 crc kubenswrapper[4979]: I0130 23:07:05.995873 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vkk9\" (UniqueName: \"kubernetes.io/projected/eeb09949-6907-4277-8d3e-1b0090b437ab-kube-api-access-8vkk9\") pod \"mariadb-client\" (UID: \"eeb09949-6907-4277-8d3e-1b0090b437ab\") " pod="openstack/mariadb-client"
Jan 30 23:07:06 crc kubenswrapper[4979]: I0130 23:07:06.091793 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:06 crc kubenswrapper[4979]: I0130 23:07:06.206181 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6e44c0eb41f535c1f9148ea313afd36baeddf318b670edacb1d512e409f16fb"
Jan 30 23:07:06 crc kubenswrapper[4979]: I0130 23:07:06.206680 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:06 crc kubenswrapper[4979]: I0130 23:07:06.229782 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="d5c3d722-f00d-4176-95e2-be3e349e9be4" podUID="eeb09949-6907-4277-8d3e-1b0090b437ab"
Jan 30 23:07:06 crc kubenswrapper[4979]: I0130 23:07:06.545539 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:07 crc kubenswrapper[4979]: I0130 23:07:07.090106 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5c3d722-f00d-4176-95e2-be3e349e9be4" path="/var/lib/kubelet/pods/d5c3d722-f00d-4176-95e2-be3e349e9be4/volumes"
Jan 30 23:07:07 crc kubenswrapper[4979]: I0130 23:07:07.215163 4979 generic.go:334] "Generic (PLEG): container finished" podID="eeb09949-6907-4277-8d3e-1b0090b437ab" containerID="1414499c1891378760db0224adf8dc6e194b9af0ea5d3cf09c3a9e612363f6b8" exitCode=0
Jan 30 23:07:07 crc kubenswrapper[4979]: I0130 23:07:07.215215 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"eeb09949-6907-4277-8d3e-1b0090b437ab","Type":"ContainerDied","Data":"1414499c1891378760db0224adf8dc6e194b9af0ea5d3cf09c3a9e612363f6b8"}
Jan 30 23:07:07 crc kubenswrapper[4979]: I0130 23:07:07.215246 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"eeb09949-6907-4277-8d3e-1b0090b437ab","Type":"ContainerStarted","Data":"a2b63c5fb36cb3fc1bf6a1bc440f232da7b94e99a04d42fdb97f1b6ef9ede3d4"}
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.559156 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.580864 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_eeb09949-6907-4277-8d3e-1b0090b437ab/mariadb-client/0.log"
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.608268 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.617120 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.714766 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vkk9\" (UniqueName: \"kubernetes.io/projected/eeb09949-6907-4277-8d3e-1b0090b437ab-kube-api-access-8vkk9\") pod \"eeb09949-6907-4277-8d3e-1b0090b437ab\" (UID: \"eeb09949-6907-4277-8d3e-1b0090b437ab\") "
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.721917 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb09949-6907-4277-8d3e-1b0090b437ab-kube-api-access-8vkk9" (OuterVolumeSpecName: "kube-api-access-8vkk9") pod "eeb09949-6907-4277-8d3e-1b0090b437ab" (UID: "eeb09949-6907-4277-8d3e-1b0090b437ab"). InnerVolumeSpecName "kube-api-access-8vkk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:07:08 crc kubenswrapper[4979]: I0130 23:07:08.816779 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vkk9\" (UniqueName: \"kubernetes.io/projected/eeb09949-6907-4277-8d3e-1b0090b437ab-kube-api-access-8vkk9\") on node \"crc\" DevicePath \"\""
Jan 30 23:07:09 crc kubenswrapper[4979]: I0130 23:07:09.083651 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb09949-6907-4277-8d3e-1b0090b437ab" path="/var/lib/kubelet/pods/eeb09949-6907-4277-8d3e-1b0090b437ab/volumes"
Jan 30 23:07:09 crc kubenswrapper[4979]: I0130 23:07:09.231171 4979 scope.go:117] "RemoveContainer" containerID="1414499c1891378760db0224adf8dc6e194b9af0ea5d3cf09c3a9e612363f6b8"
Jan 30 23:07:09 crc kubenswrapper[4979]: I0130 23:07:09.231273 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 23:07:11 crc kubenswrapper[4979]: I0130 23:07:11.070561 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:07:11 crc kubenswrapper[4979]: E0130 23:07:11.071377 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:07:23 crc kubenswrapper[4979]: I0130 23:07:23.069832 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:07:23 crc kubenswrapper[4979]: E0130 23:07:23.071821 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:07:36 crc kubenswrapper[4979]: I0130 23:07:36.069498 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6"
Jan 30 23:07:36 crc kubenswrapper[4979]: E0130 23:07:36.070363 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.224149 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 23:07:41 crc kubenswrapper[4979]: E0130 23:07:41.224992 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb09949-6907-4277-8d3e-1b0090b437ab" containerName="mariadb-client"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.225006 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb09949-6907-4277-8d3e-1b0090b437ab" containerName="mariadb-client"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.225172 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb09949-6907-4277-8d3e-1b0090b437ab" containerName="mariadb-client"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.225925 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.228360 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-qxjtc"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.228431 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.229098 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.245667 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.246910 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.264400 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"]
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.266431 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.276928 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.287085 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.321534 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341187 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-config\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341236 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6afaef21-c973-4ec1-ae90-f3c9b603f713-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341257 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afaef21-c973-4ec1-ae90-f3c9b603f713-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341278 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7fbe256e-5861-4bd2-b76d-a53f79b48380-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341294 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341325 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341369 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc7dr\" (UniqueName: \"kubernetes.io/projected/7fbe256e-5861-4bd2-b76d-a53f79b48380-kube-api-access-cc7dr\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341424 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afaef21-c973-4ec1-ae90-f3c9b603f713-config\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341476 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvz8c\" (UniqueName: \"kubernetes.io/projected/6afaef21-c973-4ec1-ae90-f3c9b603f713-kube-api-access-vvz8c\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341512 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fbe256e-5861-4bd2-b76d-a53f79b48380-config\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341534 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341568 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkhjd\" (UniqueName: \"kubernetes.io/projected/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-kube-api-access-zkhjd\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341600 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbe256e-5861-4bd2-b76d-a53f79b48380-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341644 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341668 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341697 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6afaef21-c973-4ec1-ae90-f3c9b603f713-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341734 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.341774 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fbe256e-5861-4bd2-b76d-a53f79b48380-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443160 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc7dr\" (UniqueName: \"kubernetes.io/projected/7fbe256e-5861-4bd2-b76d-a53f79b48380-kube-api-access-cc7dr\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443262 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afaef21-c973-4ec1-ae90-f3c9b603f713-config\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443294 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvz8c\" (UniqueName: \"kubernetes.io/projected/6afaef21-c973-4ec1-ae90-f3c9b603f713-kube-api-access-vvz8c\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443328 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443354 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkhjd\" (UniqueName: \"kubernetes.io/projected/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-kube-api-access-zkhjd\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443378 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fbe256e-5861-4bd2-b76d-a53f79b48380-config\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443403 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbe256e-5861-4bd2-b76d-a53f79b48380-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443451 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443484 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443507 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6afaef21-c973-4ec1-ae90-f3c9b603f713-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443544 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443587 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fbe256e-5861-4bd2-b76d-a53f79b48380-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443623 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6afaef21-c973-4ec1-ae90-f3c9b603f713-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.444253 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6afaef21-c973-4ec1-ae90-f3c9b603f713-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.444414 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6afaef21-c973-4ec1-ae90-f3c9b603f713-config\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.444656 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fbe256e-5861-4bd2-b76d-a53f79b48380-config\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.444800 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6afaef21-c973-4ec1-ae90-f3c9b603f713-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.444982 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fbe256e-5861-4bd2-b76d-a53f79b48380-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.443652 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afaef21-c973-4ec1-ae90-f3c9b603f713-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.445668 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-config\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.446920 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-config\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.446963 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7fbe256e-5861-4bd2-b76d-a53f79b48380-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.446966 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7fbe256e-5861-4bd2-b76d-a53f79b48380-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.447078 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.447105 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.447620 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.449661 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.449697 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9c4311504aa9ddb87cb58b309caa6648fd7afe05a49693bfa2a051e7126a9c4f/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.449831 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.449861 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/918694f6df4459f5128a01366ad2648fae189e6c4cb8a5e1b5ff346c136bec2e/globalmount\"" pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.451663 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.453581 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.454455 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.456969 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/68f121d80a25a2dfded0aacfd93a0c9c0b91744b1671c55870391288bebd4413/globalmount\"" pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.457257 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afaef21-c973-4ec1-ae90-f3c9b603f713-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.461477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbe256e-5861-4bd2-b76d-a53f79b48380-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.468642 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkhjd\" (UniqueName: \"kubernetes.io/projected/977a1b80-05e8-4d3c-acbb-e9ea09b98ab0-kube-api-access-zkhjd\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.468787 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc7dr\" (UniqueName: \"kubernetes.io/projected/7fbe256e-5861-4bd2-b76d-a53f79b48380-kube-api-access-cc7dr\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.469877 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvz8c\" (UniqueName: \"kubernetes.io/projected/6afaef21-c973-4ec1-ae90-f3c9b603f713-kube-api-access-vvz8c\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.474370 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.500843 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a58079d-b563-4898-9ea2-3cb36d1ce352\") pod \"ovsdbserver-nb-2\" (UID: \"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0\") " pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.505955 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.508092 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-gws9k"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.508304 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.508957 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.509661 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.512470 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b865ac35-12ee-4430-825a-97c95fc45a5a\") pod \"ovsdbserver-nb-1\" (UID: \"7fbe256e-5861-4bd2-b76d-a53f79b48380\") " pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.524779 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"]
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.527063 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.531685 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83ae23ce-ca52-4ac4-9866-3136b2cfeb0d\") pod \"ovsdbserver-nb-0\" (UID: \"6afaef21-c973-4ec1-ae90-f3c9b603f713\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.541154 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"]
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.551714 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.551829 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e971ad9f-b09c-4504-8caf-f6c9f0801e00-config\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.551878 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e971ad9f-b09c-4504-8caf-f6c9f0801e00-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.551933 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glw4r\" (UniqueName: \"kubernetes.io/projected/e971ad9f-b09c-4504-8caf-f6c9f0801e00-kube-api-access-glw4r\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.551968 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e971ad9f-b09c-4504-8caf-f6c9f0801e00-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.552099 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e971ad9f-b09c-4504-8caf-f6c9f0801e00-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.551826 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.558847 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.570629 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.585917 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.622136 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.633111 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.663859 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e971ad9f-b09c-4504-8caf-f6c9f0801e00-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.663944 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pcq5\" (UniqueName: \"kubernetes.io/projected/755c668a-a4c9-4a52-901d-338208af4efb-kube-api-access-7pcq5\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.663989 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755c668a-a4c9-4a52-901d-338208af4efb-config\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664114 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-493a6fa2-2e09-4b64-b287-8207c725037c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-493a6fa2-2e09-4b64-b287-8207c725037c\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664143 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0076344-a5b2-4fef-8a6f-28b6194b850e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664166 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e971ad9f-b09c-4504-8caf-f6c9f0801e00-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664190 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0076344-a5b2-4fef-8a6f-28b6194b850e-config\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664215 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b0076344-a5b2-4fef-8a6f-28b6194b850e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664239 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0076344-a5b2-4fef-8a6f-28b6194b850e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664268 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/755c668a-a4c9-4a52-901d-338208af4efb-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664293 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664341 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e971ad9f-b09c-4504-8caf-f6c9f0801e00-config\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664368 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e971ad9f-b09c-4504-8caf-f6c9f0801e00-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664391 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/755c668a-a4c9-4a52-901d-338208af4efb-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664420 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755c668a-a4c9-4a52-901d-338208af4efb-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664445 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n6t8\" (UniqueName: \"kubernetes.io/projected/b0076344-a5b2-4fef-8a6f-28b6194b850e-kube-api-access-6n6t8\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664466 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glw4r\" (UniqueName: \"kubernetes.io/projected/e971ad9f-b09c-4504-8caf-f6c9f0801e00-kube-api-access-glw4r\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.664492 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8d3de03f-ac34-4942-9927-6344cc98f002\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d3de03f-ac34-4942-9927-6344cc98f002\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.666109 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e971ad9f-b09c-4504-8caf-f6c9f0801e00-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.666643 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e971ad9f-b09c-4504-8caf-f6c9f0801e00-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.667949 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e971ad9f-b09c-4504-8caf-f6c9f0801e00-config\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.675716 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.675768 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5e9743c8090485d3d2afb43a58a0244989161bfb733c2eac195ad668a813c39f/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.686175 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e971ad9f-b09c-4504-8caf-f6c9f0801e00-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.696330 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glw4r\" (UniqueName: \"kubernetes.io/projected/e971ad9f-b09c-4504-8caf-f6c9f0801e00-kube-api-access-glw4r\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.709720 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c9dd8b73-1a38-4b3a-859a-f661a2b8f4c6\") pod \"ovsdbserver-sb-0\" (UID: \"e971ad9f-b09c-4504-8caf-f6c9f0801e00\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768000 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755c668a-a4c9-4a52-901d-338208af4efb-config\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768194 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-493a6fa2-2e09-4b64-b287-8207c725037c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-493a6fa2-2e09-4b64-b287-8207c725037c\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768347 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0076344-a5b2-4fef-8a6f-28b6194b850e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768553 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0076344-a5b2-4fef-8a6f-28b6194b850e-config\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768668 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b0076344-a5b2-4fef-8a6f-28b6194b850e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768705 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0076344-a5b2-4fef-8a6f-28b6194b850e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768729 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/755c668a-a4c9-4a52-901d-338208af4efb-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768807 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/755c668a-a4c9-4a52-901d-338208af4efb-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768837 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755c668a-a4c9-4a52-901d-338208af4efb-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768861 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n6t8\" (UniqueName: \"kubernetes.io/projected/b0076344-a5b2-4fef-8a6f-28b6194b850e-kube-api-access-6n6t8\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768899 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8d3de03f-ac34-4942-9927-6344cc98f002\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d3de03f-ac34-4942-9927-6344cc98f002\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.768936 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pcq5\" (UniqueName: \"kubernetes.io/projected/755c668a-a4c9-4a52-901d-338208af4efb-kube-api-access-7pcq5\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.770424 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755c668a-a4c9-4a52-901d-338208af4efb-config\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.771134 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b0076344-a5b2-4fef-8a6f-28b6194b850e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.771401 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/755c668a-a4c9-4a52-901d-338208af4efb-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.771513 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0076344-a5b2-4fef-8a6f-28b6194b850e-config\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.771961 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0076344-a5b2-4fef-8a6f-28b6194b850e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.772481 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/755c668a-a4c9-4a52-901d-338208af4efb-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.777412 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.777457 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-493a6fa2-2e09-4b64-b287-8207c725037c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-493a6fa2-2e09-4b64-b287-8207c725037c\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/51b6a8149a781e83bc033c481ee8447251e18fbab398dcdd6705f5556202058c/globalmount\"" pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.777484 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755c668a-a4c9-4a52-901d-338208af4efb-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.777665 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.777730 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8d3de03f-ac34-4942-9927-6344cc98f002\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d3de03f-ac34-4942-9927-6344cc98f002\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/876aca8e7382817e075ed153eb3238980fa4dbca8f862d2759510bfc684ac158/globalmount\"" pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.794575 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pcq5\" (UniqueName: \"kubernetes.io/projected/755c668a-a4c9-4a52-901d-338208af4efb-kube-api-access-7pcq5\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.796759 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n6t8\" (UniqueName: \"kubernetes.io/projected/b0076344-a5b2-4fef-8a6f-28b6194b850e-kube-api-access-6n6t8\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.797651 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0076344-a5b2-4fef-8a6f-28b6194b850e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.808899 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-493a6fa2-2e09-4b64-b287-8207c725037c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-493a6fa2-2e09-4b64-b287-8207c725037c\") pod \"ovsdbserver-sb-2\" (UID: \"b0076344-a5b2-4fef-8a6f-28b6194b850e\") " pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:41 crc kubenswrapper[4979]: I0130 23:07:41.817238 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8d3de03f-ac34-4942-9927-6344cc98f002\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d3de03f-ac34-4942-9927-6344cc98f002\") pod \"ovsdbserver-sb-1\" (UID: \"755c668a-a4c9-4a52-901d-338208af4efb\") " pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:41.964353 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:41.994724 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.002016 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.106139 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.223611 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.507792 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6afaef21-c973-4ec1-ae90-f3c9b603f713","Type":"ContainerStarted","Data":"d0c39143de06ebe9cd1b12315fb4ecd2d7dd2c065bcc788d939246493e0bd6f9"}
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.508345 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6afaef21-c973-4ec1-ae90-f3c9b603f713","Type":"ContainerStarted","Data":"5c2bf53cd937d5ac0e7a882b42a4a8a3628eea12fdc0fc51eef6ec3827038da2"}
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.508358 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6afaef21-c973-4ec1-ae90-f3c9b603f713","Type":"ContainerStarted","Data":"2a4e15528c0dedeca1ac8f76b1da55061eb792bda36e6799859912e5b5c1933e"}
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.510152 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"7fbe256e-5861-4bd2-b76d-a53f79b48380","Type":"ContainerStarted","Data":"61030cb286abaa2c9f8538b45fc88d8936b1ea892fb44474226c07b6da0a542a"}
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.510236 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"7fbe256e-5861-4bd2-b76d-a53f79b48380","Type":"ContainerStarted","Data":"2c2f621adbee286f2e786d8f506dbf81838394c0dd92c3525de6ec27bd1e7837"}
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.510260 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"7fbe256e-5861-4bd2-b76d-a53f79b48380","Type":"ContainerStarted","Data":"bd2580ba7b23b4b69e922fecf9c587461f8a071ca435bdaab003995368b920f2"}
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.537556 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=2.537533592 podStartE2EDuration="2.537533592s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:42.531808209 +0000 UTC m=+5258.493055242" watchObservedRunningTime="2026-01-30 23:07:42.537533592 +0000 UTC m=+5258.498780635"
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.557521 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=2.557501239 podStartE2EDuration="2.557501239s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:42.555772463 +0000 UTC m=+5258.517019496" watchObservedRunningTime="2026-01-30 23:07:42.557501239 +0000 UTC m=+5258.518748272"
Jan 30 23:07:42 crc kubenswrapper[4979]: I0130 23:07:42.955246 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 30 23:07:42 crc kubenswrapper[4979]: W0130 23:07:42.960268 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod977a1b80_05e8_4d3c_acbb_e9ea09b98ab0.slice/crio-70e6b4175382333d6b9bb2920dfa2ae1ada7a159890926591c34071b3fe57686 WatchSource:0}: Error finding container 70e6b4175382333d6b9bb2920dfa2ae1ada7a159890926591c34071b3fe57686: Status 404 returned error can't find the container with id 70e6b4175382333d6b9bb2920dfa2ae1ada7a159890926591c34071b3fe57686
Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.063592 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 23:07:43 crc kubenswrapper[4979]: W0130 23:07:43.071631 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode971ad9f_b09c_4504_8caf_f6c9f0801e00.slice/crio-3df5875a745309de491b72c9f97bb99aef64d5defbc778e8c5bd7d6677452e4d WatchSource:0}: Error finding container 3df5875a745309de491b72c9f97bb99aef64d5defbc778e8c5bd7d6677452e4d: Status 404 returned error can't find the container with id 3df5875a745309de491b72c9f97bb99aef64d5defbc778e8c5bd7d6677452e4d
Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.520168 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0","Type":"ContainerStarted","Data":"3223feaa98801b01329ed187a8540fdad9740917d05dcb1d5b0cb5aa4d469e6a"}
Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.520215 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0","Type":"ContainerStarted","Data":"c80a588eae18f9547cfdc9e48db31fd672f165661f20b21bdb48d2c83a28a666"}
Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.520226 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"977a1b80-05e8-4d3c-acbb-e9ea09b98ab0","Type":"ContainerStarted","Data":"70e6b4175382333d6b9bb2920dfa2ae1ada7a159890926591c34071b3fe57686"}
Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.524675 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e971ad9f-b09c-4504-8caf-f6c9f0801e00","Type":"ContainerStarted","Data":"260cd81d474b710574b3f8a6eb3110375102080c90db56c5b42473efb73a8ec1"}
Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.524746 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e971ad9f-b09c-4504-8caf-f6c9f0801e00","Type":"ContainerStarted","Data":"b48e835aa0a360aa32ea8c4737c8052fef8e01380236ba4013f36b0ea86a378e"}
Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.524764 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e971ad9f-b09c-4504-8caf-f6c9f0801e00","Type":"ContainerStarted","Data":"3df5875a745309de491b72c9f97bb99aef64d5defbc778e8c5bd7d6677452e4d"}
Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.542429 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.542408795 podStartE2EDuration="3.542408795s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:43.539725893 +0000 UTC m=+5259.500972936" watchObservedRunningTime="2026-01-30 23:07:43.542408795 +0000 UTC m=+5259.503655828"
Jan 30 23:07:43 crc kubenswrapper[4979]: I0130
23:07:43.566677 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.566655097 podStartE2EDuration="3.566655097s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:43.558388504 +0000 UTC m=+5259.519635537" watchObservedRunningTime="2026-01-30 23:07:43.566655097 +0000 UTC m=+5259.527902130" Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.638621 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 23:07:43 crc kubenswrapper[4979]: I0130 23:07:43.987091 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.535217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"b0076344-a5b2-4fef-8a6f-28b6194b850e","Type":"ContainerStarted","Data":"d6ae2c685aae9731a910ef68b09d5270f6864e0ae9eb4a1760ea7adc17914b78"} Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.535646 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"b0076344-a5b2-4fef-8a6f-28b6194b850e","Type":"ContainerStarted","Data":"433f5d079d3986f10a9eccf233a36e463493a6aba9305e164e67a3aacaea7d61"} Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.535664 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"b0076344-a5b2-4fef-8a6f-28b6194b850e","Type":"ContainerStarted","Data":"02877d77699bf44f061f8ed0899d78a733535b092cc278a0e3accdef72ed5fd6"} Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.538255 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"755c668a-a4c9-4a52-901d-338208af4efb","Type":"ContainerStarted","Data":"e248a953d27976cbeb11142e924262b8c9b9e76adc6cb183594654190713a44a"} Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.538294 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"755c668a-a4c9-4a52-901d-338208af4efb","Type":"ContainerStarted","Data":"60b35e06f342ac087469e8c2e5b961f83d746bb94b3b819c2b659c7be2dbad09"} Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.538304 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"755c668a-a4c9-4a52-901d-338208af4efb","Type":"ContainerStarted","Data":"a626721da3fc152a6144e6a16d0b89b334dd123ac91b67cc1ea5e5389daf2143"} Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.559397 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.562135 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.562112396 podStartE2EDuration="4.562112396s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:44.55185647 +0000 UTC m=+5260.513103543" watchObservedRunningTime="2026-01-30 23:07:44.562112396 +0000 UTC m=+5260.523359469" Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.581507 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" 
podStartSLOduration=4.581482447 podStartE2EDuration="4.581482447s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:44.575990729 +0000 UTC m=+5260.537237792" watchObservedRunningTime="2026-01-30 23:07:44.581482447 +0000 UTC m=+5260.542729520" Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.622498 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.634223 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.964908 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:44 crc kubenswrapper[4979]: I0130 23:07:44.996099 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:45 crc kubenswrapper[4979]: I0130 23:07:45.002299 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:46 crc kubenswrapper[4979]: I0130 23:07:46.559982 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:46 crc kubenswrapper[4979]: I0130 23:07:46.622318 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:46 crc kubenswrapper[4979]: I0130 23:07:46.633866 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:46 crc kubenswrapper[4979]: I0130 23:07:46.965180 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:46 crc kubenswrapper[4979]: I0130 23:07:46.995002 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.002506 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.069927 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:07:47 crc kubenswrapper[4979]: E0130 23:07:47.070434 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.603292 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.651533 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.685059 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.687565 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.747107 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.891321 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85b4d84c9c-wcmts"] Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.892696 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.896493 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.910090 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85b4d84c9c-wcmts"] Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.985440 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-ovsdbserver-nb\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.987067 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-dns-svc\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.987218 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd6hd\" (UniqueName: \"kubernetes.io/projected/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-kube-api-access-fd6hd\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:47 crc kubenswrapper[4979]: I0130 23:07:47.987248 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-config\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.013478 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.040154 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.054790 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.071635 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.088649 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-ovsdbserver-nb\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " 
pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.088706 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-dns-svc\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.088767 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd6hd\" (UniqueName: \"kubernetes.io/projected/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-kube-api-access-fd6hd\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.088794 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-config\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.089764 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-config\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.090416 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-dns-svc\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.090604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-ovsdbserver-nb\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.127567 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd6hd\" (UniqueName: \"kubernetes.io/projected/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-kube-api-access-fd6hd\") pod \"dnsmasq-dns-85b4d84c9c-wcmts\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.226561 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.391503 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b4d84c9c-wcmts"] Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.440981 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8476b55b47-6g9tr"] Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.442337 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.448276 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.460693 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476b55b47-6g9tr"] Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.509674 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dvltl"] Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.511677 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.529939 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dvltl"] Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.595788 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-dns-svc\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.595869 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7zpd\" (UniqueName: \"kubernetes.io/projected/4dbc7280-e667-4d13-b0a0-eb654db2900a-kube-api-access-b7zpd\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.595922 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-sb\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.595955 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-utilities\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.595976 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-config\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.595990 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jndjw\" (UniqueName: \"kubernetes.io/projected/f817e1e3-576c-45c4-9049-44f021907fa8-kube-api-access-jndjw\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.596009 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-catalog-content\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.596053 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-nb\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.617737 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.697555 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-catalog-content\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.697665 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-nb\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.697763 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-dns-svc\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.697868 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7zpd\" (UniqueName: \"kubernetes.io/projected/4dbc7280-e667-4d13-b0a0-eb654db2900a-kube-api-access-b7zpd\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.697986 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-sb\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.698018 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-utilities\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.698105 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-config\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 
23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.698123 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jndjw\" (UniqueName: \"kubernetes.io/projected/f817e1e3-576c-45c4-9049-44f021907fa8-kube-api-access-jndjw\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.699571 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-catalog-content\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.700920 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-nb\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.701566 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-sb\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.701883 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-utilities\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.702591 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-dns-svc\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.703429 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-config\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.720393 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7zpd\" (UniqueName: \"kubernetes.io/projected/4dbc7280-e667-4d13-b0a0-eb654db2900a-kube-api-access-b7zpd\") pod \"dnsmasq-dns-8476b55b47-6g9tr\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.735907 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jndjw\" (UniqueName: \"kubernetes.io/projected/f817e1e3-576c-45c4-9049-44f021907fa8-kube-api-access-jndjw\") pod \"community-operators-dvltl\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.801171 4979 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.837879 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:48 crc kubenswrapper[4979]: I0130 23:07:48.865189 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b4d84c9c-wcmts"] Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.177926 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476b55b47-6g9tr"] Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.459989 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dvltl"] Jan 30 23:07:49 crc kubenswrapper[4979]: W0130 23:07:49.471466 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf817e1e3_576c_45c4_9049_44f021907fa8.slice/crio-85cf7f7634aca4759f050ff3d6733c924e1cd84d00aeb742f3ed6ceb084ee5d5 WatchSource:0}: Error finding container 85cf7f7634aca4759f050ff3d6733c924e1cd84d00aeb742f3ed6ceb084ee5d5: Status 404 returned error can't find the container with id 85cf7f7634aca4759f050ff3d6733c924e1cd84d00aeb742f3ed6ceb084ee5d5 Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.586803 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerStarted","Data":"85cf7f7634aca4759f050ff3d6733c924e1cd84d00aeb742f3ed6ceb084ee5d5"} Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.588872 4979 generic.go:334] "Generic (PLEG): container finished" podID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerID="eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4" exitCode=0 Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.588933 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" event={"ID":"4dbc7280-e667-4d13-b0a0-eb654db2900a","Type":"ContainerDied","Data":"eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4"} Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.588952 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" event={"ID":"4dbc7280-e667-4d13-b0a0-eb654db2900a","Type":"ContainerStarted","Data":"d2099c11bd8a88809e6380887ce5ad437e53d8e142c196904be1e3882261f67b"} Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.590115 4979 generic.go:334] "Generic (PLEG): container finished" podID="f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" containerID="7cfbab04a2120345ec1c2d8a670ed2de555c3fb869f81a6829e49293943f6184" exitCode=0 Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.591231 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" event={"ID":"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc","Type":"ContainerDied","Data":"7cfbab04a2120345ec1c2d8a670ed2de555c3fb869f81a6829e49293943f6184"} Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.591264 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" event={"ID":"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc","Type":"ContainerStarted","Data":"41781b2cee80c0df7c286cbd20d0539b6f1e9ba1cd6b59a15c3498a6ad3139f2"} Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.866680 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.918743 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-ovsdbserver-nb\") pod \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.918874 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-dns-svc\") pod \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.918896 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-config\") pod \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.918953 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd6hd\" (UniqueName: \"kubernetes.io/projected/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-kube-api-access-fd6hd\") pod \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\" (UID: \"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc\") " Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.923345 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-kube-api-access-fd6hd" (OuterVolumeSpecName: "kube-api-access-fd6hd") pod "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" (UID: "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc"). InnerVolumeSpecName "kube-api-access-fd6hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.937998 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-config" (OuterVolumeSpecName: "config") pod "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" (UID: "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.938337 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" (UID: "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:07:49 crc kubenswrapper[4979]: I0130 23:07:49.939312 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" (UID: "f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.021488 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.021524 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.021533 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.021544 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd6hd\" (UniqueName: \"kubernetes.io/projected/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc-kube-api-access-fd6hd\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.602170 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" event={"ID":"f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc","Type":"ContainerDied","Data":"41781b2cee80c0df7c286cbd20d0539b6f1e9ba1cd6b59a15c3498a6ad3139f2"} Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.602231 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85b4d84c9c-wcmts" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.602530 4979 scope.go:117] "RemoveContainer" containerID="7cfbab04a2120345ec1c2d8a670ed2de555c3fb869f81a6829e49293943f6184" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.605495 4979 generic.go:334] "Generic (PLEG): container finished" podID="f817e1e3-576c-45c4-9049-44f021907fa8" containerID="d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3" exitCode=0 Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.605931 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerDied","Data":"d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3"} Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.608808 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.614750 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" event={"ID":"4dbc7280-e667-4d13-b0a0-eb654db2900a","Type":"ContainerStarted","Data":"00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a"} Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.615380 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.657504 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" podStartSLOduration=2.657485006 podStartE2EDuration="2.657485006s" podCreationTimestamp="2026-01-30 23:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:50.655424571 +0000 UTC m=+5266.616671604" watchObservedRunningTime="2026-01-30 
23:07:50.657485006 +0000 UTC m=+5266.618732039" Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.718145 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b4d84c9c-wcmts"] Jan 30 23:07:50 crc kubenswrapper[4979]: I0130 23:07:50.723147 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85b4d84c9c-wcmts"] Jan 30 23:07:51 crc kubenswrapper[4979]: I0130 23:07:51.089419 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" path="/var/lib/kubelet/pods/f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc/volumes" Jan 30 23:07:51 crc kubenswrapper[4979]: I0130 23:07:51.624614 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerStarted","Data":"f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746"} Jan 30 23:07:51 crc kubenswrapper[4979]: I0130 23:07:51.677423 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 30 23:07:52 crc kubenswrapper[4979]: I0130 23:07:52.050105 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 30 23:07:52 crc kubenswrapper[4979]: I0130 23:07:52.638235 4979 generic.go:334] "Generic (PLEG): container finished" podID="f817e1e3-576c-45c4-9049-44f021907fa8" containerID="f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746" exitCode=0 Jan 30 23:07:52 crc kubenswrapper[4979]: I0130 23:07:52.638290 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerDied","Data":"f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746"} Jan 30 23:07:53 crc kubenswrapper[4979]: I0130 23:07:53.651363 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerStarted","Data":"2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4"} Jan 30 23:07:53 crc kubenswrapper[4979]: I0130 23:07:53.677534 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dvltl" podStartSLOduration=3.245274332 podStartE2EDuration="5.677512128s" podCreationTimestamp="2026-01-30 23:07:48 +0000 UTC" firstStartedPulling="2026-01-30 23:07:50.608092998 +0000 UTC m=+5266.569340071" lastFinishedPulling="2026-01-30 23:07:53.040330824 +0000 UTC m=+5269.001577867" observedRunningTime="2026-01-30 23:07:53.668741242 +0000 UTC m=+5269.629988285" watchObservedRunningTime="2026-01-30 23:07:53.677512128 +0000 UTC m=+5269.638759171" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.252591 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 30 23:07:54 crc kubenswrapper[4979]: E0130 23:07:54.252932 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" containerName="init" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.252945 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" containerName="init" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.253108 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79dee49-00b5-48ef-9a7b-9ae5dd23b0fc" containerName="init" Jan 30 23:07:54 
crc kubenswrapper[4979]: I0130 23:07:54.253745 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.256363 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.277147 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.404163 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.404254 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ssf9\" (UniqueName: \"kubernetes.io/projected/43991b8d-f7aa-479c-9d38-e19114106e81-kube-api-access-5ssf9\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.404364 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/43991b8d-f7aa-479c-9d38-e19114106e81-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.505324 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/43991b8d-f7aa-479c-9d38-e19114106e81-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.505421 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.505449 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ssf9\" (UniqueName: \"kubernetes.io/projected/43991b8d-f7aa-479c-9d38-e19114106e81-kube-api-access-5ssf9\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.509560 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.509776 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d5214e6110063cd82083e5ae2f81858da3dca43ff685751cb7d855e8e239e21a/globalmount\"" pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.514558 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/43991b8d-f7aa-479c-9d38-e19114106e81-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.535284 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ssf9\" (UniqueName: \"kubernetes.io/projected/43991b8d-f7aa-479c-9d38-e19114106e81-kube-api-access-5ssf9\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.562948 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36f3ba99-9cd2-4318-8113-1471c2d5c177\") pod \"ovn-copy-data\" (UID: \"43991b8d-f7aa-479c-9d38-e19114106e81\") " pod="openstack/ovn-copy-data" Jan 30 23:07:54 crc kubenswrapper[4979]: I0130 23:07:54.576789 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 23:07:55 crc kubenswrapper[4979]: I0130 23:07:55.235967 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 23:07:55 crc kubenswrapper[4979]: W0130 23:07:55.257137 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43991b8d_f7aa_479c_9d38_e19114106e81.slice/crio-e3b94220417d5fd6d4f91d666daecc69a99b70cadbc40c5794c135a276c1e394 WatchSource:0}: Error finding container e3b94220417d5fd6d4f91d666daecc69a99b70cadbc40c5794c135a276c1e394: Status 404 returned error can't find the container with id e3b94220417d5fd6d4f91d666daecc69a99b70cadbc40c5794c135a276c1e394 Jan 30 23:07:55 crc kubenswrapper[4979]: I0130 23:07:55.679509 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"43991b8d-f7aa-479c-9d38-e19114106e81","Type":"ContainerStarted","Data":"fffbd7506642e3425021c1c24a8af84b0728ddffdb32adbffca07e12bb99bc2e"} Jan 30 23:07:55 crc kubenswrapper[4979]: I0130 23:07:55.679553 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"43991b8d-f7aa-479c-9d38-e19114106e81","Type":"ContainerStarted","Data":"e3b94220417d5fd6d4f91d666daecc69a99b70cadbc40c5794c135a276c1e394"} Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.070476 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:07:58 crc kubenswrapper[4979]: E0130 23:07:58.071768 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.803876 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.835779 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=5.835747358 podStartE2EDuration="5.835747358s" podCreationTimestamp="2026-01-30 23:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:07:55.694534058 +0000 UTC m=+5271.655781091" watchObservedRunningTime="2026-01-30 23:07:58.835747358 +0000 UTC m=+5274.796994411" Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.839584 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.839762 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.889951 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lpljg"] Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.890363 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" containerName="dnsmasq-dns" 
containerID="cri-o://57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105" gracePeriod=10 Jan 30 23:07:58 crc kubenswrapper[4979]: I0130 23:07:58.927589 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.393020 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.400246 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-config\") pod \"2795bb3d-be81-4873-96f6-6f3a42857827\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.400297 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-dns-svc\") pod \"2795bb3d-be81-4873-96f6-6f3a42857827\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.400485 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l5q5\" (UniqueName: \"kubernetes.io/projected/2795bb3d-be81-4873-96f6-6f3a42857827-kube-api-access-9l5q5\") pod \"2795bb3d-be81-4873-96f6-6f3a42857827\" (UID: \"2795bb3d-be81-4873-96f6-6f3a42857827\") " Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.420532 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2795bb3d-be81-4873-96f6-6f3a42857827-kube-api-access-9l5q5" (OuterVolumeSpecName: "kube-api-access-9l5q5") pod "2795bb3d-be81-4873-96f6-6f3a42857827" (UID: "2795bb3d-be81-4873-96f6-6f3a42857827"). InnerVolumeSpecName "kube-api-access-9l5q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.504917 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l5q5\" (UniqueName: \"kubernetes.io/projected/2795bb3d-be81-4873-96f6-6f3a42857827-kube-api-access-9l5q5\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.533215 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-config" (OuterVolumeSpecName: "config") pod "2795bb3d-be81-4873-96f6-6f3a42857827" (UID: "2795bb3d-be81-4873-96f6-6f3a42857827"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.552693 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2795bb3d-be81-4873-96f6-6f3a42857827" (UID: "2795bb3d-be81-4873-96f6-6f3a42857827"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.613400 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.613452 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2795bb3d-be81-4873-96f6-6f3a42857827-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.737736 4979 generic.go:334] "Generic (PLEG): container finished" podID="2795bb3d-be81-4873-96f6-6f3a42857827" containerID="57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105" exitCode=0 Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.738536 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" event={"ID":"2795bb3d-be81-4873-96f6-6f3a42857827","Type":"ContainerDied","Data":"57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105"} Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.738468 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.739076 4979 scope.go:117] "RemoveContainer" containerID="57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.739716 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lpljg" event={"ID":"2795bb3d-be81-4873-96f6-6f3a42857827","Type":"ContainerDied","Data":"89c2e0105cd91d45be0f9cf486bdd2b515115144c7c631fa1af7dbc2cbd8f36d"} Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.765934 4979 scope.go:117] "RemoveContainer" containerID="ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.797766 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lpljg"] Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.799495 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.801188 4979 scope.go:117] "RemoveContainer" containerID="57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105" Jan 30 23:07:59 crc kubenswrapper[4979]: E0130 23:07:59.802124 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105\": container with ID starting with 57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105 not found: ID does not exist" containerID="57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.802190 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105"} err="failed to get container status \"57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105\": rpc error: code = NotFound desc = could not find container \"57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105\": container with ID starting with 57b890c58cab7db7464c2c3c04053dd90266df8e99bec65ba0720cf96a720105 
not found: ID does not exist" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.802229 4979 scope.go:117] "RemoveContainer" containerID="ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1" Jan 30 23:07:59 crc kubenswrapper[4979]: E0130 23:07:59.802552 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1\": container with ID starting with ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1 not found: ID does not exist" containerID="ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.802582 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1"} err="failed to get container status \"ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1\": rpc error: code = NotFound desc = could not find container \"ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1\": container with ID starting with ec44d21d0935c303c9fffef14457538fd04d198bad092d3a6b3310e0523375b1 not found: ID does not exist" Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.804989 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lpljg"] Jan 30 23:07:59 crc kubenswrapper[4979]: I0130 23:07:59.854205 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dvltl"] Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.631722 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 23:08:00 crc kubenswrapper[4979]: E0130 23:08:00.632627 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" containerName="init" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.632644 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" containerName="init" Jan 30 23:08:00 crc kubenswrapper[4979]: E0130 23:08:00.632670 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" containerName="dnsmasq-dns" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.632679 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" containerName="dnsmasq-dns" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.632876 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" containerName="dnsmasq-dns" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.634115 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.645306 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.645680 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-dtdc9" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.650610 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.673929 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.737505 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89760273-d9f8-4c51-8af9-4a651cadc92c-scripts\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.737847 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89760273-d9f8-4c51-8af9-4a651cadc92c-config\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.737949 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89760273-d9f8-4c51-8af9-4a651cadc92c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.738182 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89760273-d9f8-4c51-8af9-4a651cadc92c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.738257 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qprwt\" (UniqueName: \"kubernetes.io/projected/89760273-d9f8-4c51-8af9-4a651cadc92c-kube-api-access-qprwt\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.839886 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89760273-d9f8-4c51-8af9-4a651cadc92c-scripts\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.840099 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89760273-d9f8-4c51-8af9-4a651cadc92c-config\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.840134 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89760273-d9f8-4c51-8af9-4a651cadc92c-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.840182 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89760273-d9f8-4c51-8af9-4a651cadc92c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.840220 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qprwt\" (UniqueName: \"kubernetes.io/projected/89760273-d9f8-4c51-8af9-4a651cadc92c-kube-api-access-qprwt\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.841015 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89760273-d9f8-4c51-8af9-4a651cadc92c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.841312 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89760273-d9f8-4c51-8af9-4a651cadc92c-scripts\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.841477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89760273-d9f8-4c51-8af9-4a651cadc92c-config\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.847901 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89760273-d9f8-4c51-8af9-4a651cadc92c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.895736 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qprwt\" (UniqueName: \"kubernetes.io/projected/89760273-d9f8-4c51-8af9-4a651cadc92c-kube-api-access-qprwt\") pod \"ovn-northd-0\" (UID: \"89760273-d9f8-4c51-8af9-4a651cadc92c\") " pod="openstack/ovn-northd-0" Jan 30 23:08:00 crc kubenswrapper[4979]: I0130 23:08:00.970886 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 23:08:01 crc kubenswrapper[4979]: I0130 23:08:01.108179 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2795bb3d-be81-4873-96f6-6f3a42857827" path="/var/lib/kubelet/pods/2795bb3d-be81-4873-96f6-6f3a42857827/volumes" Jan 30 23:08:01 crc kubenswrapper[4979]: I0130 23:08:01.519974 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 23:08:01 crc kubenswrapper[4979]: W0130 23:08:01.540426 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89760273_d9f8_4c51_8af9_4a651cadc92c.slice/crio-d60d6769b25b7f9c231929c9482294e11662999ccff73ebb2cf7b3dfd954f1f9 WatchSource:0}: Error finding container d60d6769b25b7f9c231929c9482294e11662999ccff73ebb2cf7b3dfd954f1f9: Status 404 returned error can't find the container with id d60d6769b25b7f9c231929c9482294e11662999ccff73ebb2cf7b3dfd954f1f9 Jan 30 23:08:01 crc kubenswrapper[4979]: I0130 23:08:01.759274 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dvltl" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="registry-server" containerID="cri-o://2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4" gracePeriod=2 Jan 30 23:08:01 crc kubenswrapper[4979]: I0130 23:08:01.759691 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"89760273-d9f8-4c51-8af9-4a651cadc92c","Type":"ContainerStarted","Data":"1e5c472af3d3bc83452fb0c302229ac81f4af1dc7db898a66c81f256ced076aa"} Jan 30 23:08:01 crc kubenswrapper[4979]: I0130 23:08:01.759783 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"89760273-d9f8-4c51-8af9-4a651cadc92c","Type":"ContainerStarted","Data":"d60d6769b25b7f9c231929c9482294e11662999ccff73ebb2cf7b3dfd954f1f9"} Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.282201 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvltl" Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.468175 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-utilities\") pod \"f817e1e3-576c-45c4-9049-44f021907fa8\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.468888 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-catalog-content\") pod \"f817e1e3-576c-45c4-9049-44f021907fa8\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.469803 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-utilities" (OuterVolumeSpecName: "utilities") pod "f817e1e3-576c-45c4-9049-44f021907fa8" (UID: "f817e1e3-576c-45c4-9049-44f021907fa8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.472423 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jndjw\" (UniqueName: \"kubernetes.io/projected/f817e1e3-576c-45c4-9049-44f021907fa8-kube-api-access-jndjw\") pod \"f817e1e3-576c-45c4-9049-44f021907fa8\" (UID: \"f817e1e3-576c-45c4-9049-44f021907fa8\") " Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.473569 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.481810 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f817e1e3-576c-45c4-9049-44f021907fa8-kube-api-access-jndjw" (OuterVolumeSpecName: "kube-api-access-jndjw") pod "f817e1e3-576c-45c4-9049-44f021907fa8" (UID: "f817e1e3-576c-45c4-9049-44f021907fa8"). InnerVolumeSpecName "kube-api-access-jndjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.517748 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f817e1e3-576c-45c4-9049-44f021907fa8" (UID: "f817e1e3-576c-45c4-9049-44f021907fa8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.575130 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f817e1e3-576c-45c4-9049-44f021907fa8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.575173 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jndjw\" (UniqueName: \"kubernetes.io/projected/f817e1e3-576c-45c4-9049-44f021907fa8-kube-api-access-jndjw\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.779126 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"89760273-d9f8-4c51-8af9-4a651cadc92c","Type":"ContainerStarted","Data":"1b05c028b4ab78dc76c6d91db484ae9571bc66f4b5b38e59b39f4c47dc098409"} Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.779322 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.785350 4979 generic.go:334] "Generic (PLEG): container finished" podID="f817e1e3-576c-45c4-9049-44f021907fa8" containerID="2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4" exitCode=0 Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.785395 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerDied","Data":"2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4"} Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.785416 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvltl" event={"ID":"f817e1e3-576c-45c4-9049-44f021907fa8","Type":"ContainerDied","Data":"85cf7f7634aca4759f050ff3d6733c924e1cd84d00aeb742f3ed6ceb084ee5d5"} Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.785625 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvltl"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.806637 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.806593708 podStartE2EDuration="2.806593708s" podCreationTimestamp="2026-01-30 23:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:02.804716058 +0000 UTC m=+5278.765963091" watchObservedRunningTime="2026-01-30 23:08:02.806593708 +0000 UTC m=+5278.767840741"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.810480 4979 scope.go:117] "RemoveContainer" containerID="f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.842535 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dvltl"]
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.854221 4979 scope.go:117] "RemoveContainer" containerID="d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.854989 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dvltl"]
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.887964 4979 scope.go:117] "RemoveContainer" containerID="2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4"
Jan 30 23:08:02 crc kubenswrapper[4979]: E0130 23:08:02.890064 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4\": container with ID starting with 2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4 not found: ID does not exist" containerID="2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.890186 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4"} err="failed to get container status \"2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4\": rpc error: code = NotFound desc = could not find container \"2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4\": container with ID starting with 2eed904eb03806c76fa17cbeb03f2e9ee8be1530ff6446a8a72b4be393669fa4 not found: ID does not exist"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.890250 4979 scope.go:117] "RemoveContainer" containerID="f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746"
Jan 30 23:08:02 crc kubenswrapper[4979]: E0130 23:08:02.890967 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746\": container with ID starting with f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746 not found: ID does not exist" containerID="f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.891076 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746"} err="failed to get container status \"f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746\": rpc error: code = NotFound desc = could not find container \"f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746\": container with ID starting with f3261bb90577dc7db68186f2e0dccdb1f1f01263646a1eb6f7402710e15db746 not found: ID does not exist"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.891110 4979 scope.go:117] "RemoveContainer" containerID="d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3"
Jan 30 23:08:02 crc kubenswrapper[4979]: E0130 23:08:02.892578 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3\": container with ID starting with d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3 not found: ID does not exist" containerID="d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3"
Jan 30 23:08:02 crc kubenswrapper[4979]: I0130 23:08:02.892633 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3"} err="failed to get container status \"d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3\": rpc error: code = NotFound desc = could not find container \"d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3\": container with ID starting with d7b9a0c8ad323d64e2a92bbe368127a7eb192842efd7674c24a667b5ae626ca3 not found: ID does not exist"
Jan 30 23:08:03 crc kubenswrapper[4979]: I0130 23:08:03.082347 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" path="/var/lib/kubelet/pods/f817e1e3-576c-45c4-9049-44f021907fa8/volumes"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.687480 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fcp6h"]
Jan 30 23:08:05 crc kubenswrapper[4979]: E0130 23:08:05.688575 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="registry-server"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.688593 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="registry-server"
Jan 30 23:08:05 crc kubenswrapper[4979]: E0130 23:08:05.688633 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="extract-utilities"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.688640 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="extract-utilities"
Jan 30 23:08:05 crc kubenswrapper[4979]: E0130 23:08:05.688658 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="extract-content"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.688665 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="extract-content"
Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.688835 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f817e1e3-576c-45c4-9049-44f021907fa8" containerName="registry-server"
containerName="registry-server" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.689559 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fcp6h" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.696328 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e97b-account-create-update-7kkdr"] Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.697263 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e97b-account-create-update-7kkdr" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.699352 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.706192 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fcp6h"] Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.716682 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e97b-account-create-update-7kkdr"] Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.837450 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76rdf\" (UniqueName: \"kubernetes.io/projected/cd1984c3-c561-48d8-8e99-a596088b25b7-kube-api-access-76rdf\") pod \"keystone-e97b-account-create-update-7kkdr\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " pod="openstack/keystone-e97b-account-create-update-7kkdr" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.837526 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6lz6\" (UniqueName: \"kubernetes.io/projected/244815ff-89c6-49ac-91e1-4d8f44de6066-kube-api-access-z6lz6\") pod \"keystone-db-create-fcp6h\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " pod="openstack/keystone-db-create-fcp6h" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.837636 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244815ff-89c6-49ac-91e1-4d8f44de6066-operator-scripts\") pod \"keystone-db-create-fcp6h\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " pod="openstack/keystone-db-create-fcp6h" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.837730 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1984c3-c561-48d8-8e99-a596088b25b7-operator-scripts\") pod \"keystone-e97b-account-create-update-7kkdr\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " pod="openstack/keystone-e97b-account-create-update-7kkdr" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.939380 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76rdf\" (UniqueName: \"kubernetes.io/projected/cd1984c3-c561-48d8-8e99-a596088b25b7-kube-api-access-76rdf\") pod \"keystone-e97b-account-create-update-7kkdr\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " pod="openstack/keystone-e97b-account-create-update-7kkdr" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.939457 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6lz6\" (UniqueName: \"kubernetes.io/projected/244815ff-89c6-49ac-91e1-4d8f44de6066-kube-api-access-z6lz6\") pod \"keystone-db-create-fcp6h\" (UID: 
\"244815ff-89c6-49ac-91e1-4d8f44de6066\") " pod="openstack/keystone-db-create-fcp6h" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.939488 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244815ff-89c6-49ac-91e1-4d8f44de6066-operator-scripts\") pod \"keystone-db-create-fcp6h\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " pod="openstack/keystone-db-create-fcp6h" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.939516 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1984c3-c561-48d8-8e99-a596088b25b7-operator-scripts\") pod \"keystone-e97b-account-create-update-7kkdr\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " pod="openstack/keystone-e97b-account-create-update-7kkdr" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.940589 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1984c3-c561-48d8-8e99-a596088b25b7-operator-scripts\") pod \"keystone-e97b-account-create-update-7kkdr\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " pod="openstack/keystone-e97b-account-create-update-7kkdr" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.940704 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244815ff-89c6-49ac-91e1-4d8f44de6066-operator-scripts\") pod \"keystone-db-create-fcp6h\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " pod="openstack/keystone-db-create-fcp6h" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.977855 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6lz6\" (UniqueName: \"kubernetes.io/projected/244815ff-89c6-49ac-91e1-4d8f44de6066-kube-api-access-z6lz6\") pod \"keystone-db-create-fcp6h\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " pod="openstack/keystone-db-create-fcp6h" Jan 30 23:08:05 crc kubenswrapper[4979]: I0130 23:08:05.978251 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76rdf\" (UniqueName: \"kubernetes.io/projected/cd1984c3-c561-48d8-8e99-a596088b25b7-kube-api-access-76rdf\") pod \"keystone-e97b-account-create-update-7kkdr\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " pod="openstack/keystone-e97b-account-create-update-7kkdr" Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.016493 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fcp6h" Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.037203 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e97b-account-create-update-7kkdr" Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.451272 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e97b-account-create-update-7kkdr"] Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.515175 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fcp6h"] Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.824399 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fcp6h" event={"ID":"244815ff-89c6-49ac-91e1-4d8f44de6066","Type":"ContainerStarted","Data":"958f1b82a7938a7c0d27709d282569c0aab4b64a07e68b1bb769e01caed93449"} Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.824874 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fcp6h" event={"ID":"244815ff-89c6-49ac-91e1-4d8f44de6066","Type":"ContainerStarted","Data":"0d25bcb42e6f27051a89d31c544da700a7cc3453f311a25ece7eec6ac94cf26c"} Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.828733 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e97b-account-create-update-7kkdr" event={"ID":"cd1984c3-c561-48d8-8e99-a596088b25b7","Type":"ContainerStarted","Data":"519cd3d78305849e3e5a18a0d4ee7c2c5e0a82f36ae21f2f29ad0865227dc983"} Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.828784 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e97b-account-create-update-7kkdr" event={"ID":"cd1984c3-c561-48d8-8e99-a596088b25b7","Type":"ContainerStarted","Data":"6515d8e233a2fa628ba23c75b309f7db000a664e3978a0df9d408d36f4c87c75"} Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.840706 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-fcp6h" podStartSLOduration=1.8406824880000001 podStartE2EDuration="1.840682488s" podCreationTimestamp="2026-01-30 23:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:06.838366486 +0000 UTC m=+5282.799613519" watchObservedRunningTime="2026-01-30 23:08:06.840682488 +0000 UTC m=+5282.801929521" Jan 30 23:08:06 crc kubenswrapper[4979]: I0130 23:08:06.862576 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-e97b-account-create-update-7kkdr" podStartSLOduration=1.862556916 podStartE2EDuration="1.862556916s" podCreationTimestamp="2026-01-30 23:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:06.855721612 +0000 UTC m=+5282.816968635" watchObservedRunningTime="2026-01-30 23:08:06.862556916 +0000 UTC m=+5282.823803949" Jan 30 23:08:07 crc kubenswrapper[4979]: I0130 23:08:07.845464 4979 generic.go:334] "Generic (PLEG): container finished" podID="cd1984c3-c561-48d8-8e99-a596088b25b7" containerID="519cd3d78305849e3e5a18a0d4ee7c2c5e0a82f36ae21f2f29ad0865227dc983" exitCode=0 Jan 30 23:08:07 crc kubenswrapper[4979]: I0130 23:08:07.845614 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e97b-account-create-update-7kkdr" event={"ID":"cd1984c3-c561-48d8-8e99-a596088b25b7","Type":"ContainerDied","Data":"519cd3d78305849e3e5a18a0d4ee7c2c5e0a82f36ae21f2f29ad0865227dc983"} Jan 30 23:08:07 crc kubenswrapper[4979]: I0130 23:08:07.851101 4979 generic.go:334] "Generic (PLEG): container 
finished" podID="244815ff-89c6-49ac-91e1-4d8f44de6066" containerID="958f1b82a7938a7c0d27709d282569c0aab4b64a07e68b1bb769e01caed93449" exitCode=0 Jan 30 23:08:07 crc kubenswrapper[4979]: I0130 23:08:07.851161 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fcp6h" event={"ID":"244815ff-89c6-49ac-91e1-4d8f44de6066","Type":"ContainerDied","Data":"958f1b82a7938a7c0d27709d282569c0aab4b64a07e68b1bb769e01caed93449"} Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.349858 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e97b-account-create-update-7kkdr" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.369448 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fcp6h" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.420749 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76rdf\" (UniqueName: \"kubernetes.io/projected/cd1984c3-c561-48d8-8e99-a596088b25b7-kube-api-access-76rdf\") pod \"cd1984c3-c561-48d8-8e99-a596088b25b7\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.420894 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1984c3-c561-48d8-8e99-a596088b25b7-operator-scripts\") pod \"cd1984c3-c561-48d8-8e99-a596088b25b7\" (UID: \"cd1984c3-c561-48d8-8e99-a596088b25b7\") " Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.422239 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1984c3-c561-48d8-8e99-a596088b25b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd1984c3-c561-48d8-8e99-a596088b25b7" (UID: "cd1984c3-c561-48d8-8e99-a596088b25b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.429265 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1984c3-c561-48d8-8e99-a596088b25b7-kube-api-access-76rdf" (OuterVolumeSpecName: "kube-api-access-76rdf") pod "cd1984c3-c561-48d8-8e99-a596088b25b7" (UID: "cd1984c3-c561-48d8-8e99-a596088b25b7"). InnerVolumeSpecName "kube-api-access-76rdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.522346 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6lz6\" (UniqueName: \"kubernetes.io/projected/244815ff-89c6-49ac-91e1-4d8f44de6066-kube-api-access-z6lz6\") pod \"244815ff-89c6-49ac-91e1-4d8f44de6066\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.522607 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244815ff-89c6-49ac-91e1-4d8f44de6066-operator-scripts\") pod \"244815ff-89c6-49ac-91e1-4d8f44de6066\" (UID: \"244815ff-89c6-49ac-91e1-4d8f44de6066\") " Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.523207 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76rdf\" (UniqueName: \"kubernetes.io/projected/cd1984c3-c561-48d8-8e99-a596088b25b7-kube-api-access-76rdf\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.523248 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1984c3-c561-48d8-8e99-a596088b25b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.523263 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/244815ff-89c6-49ac-91e1-4d8f44de6066-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "244815ff-89c6-49ac-91e1-4d8f44de6066" (UID: "244815ff-89c6-49ac-91e1-4d8f44de6066"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.526590 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/244815ff-89c6-49ac-91e1-4d8f44de6066-kube-api-access-z6lz6" (OuterVolumeSpecName: "kube-api-access-z6lz6") pod "244815ff-89c6-49ac-91e1-4d8f44de6066" (UID: "244815ff-89c6-49ac-91e1-4d8f44de6066"). InnerVolumeSpecName "kube-api-access-z6lz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.624484 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/244815ff-89c6-49ac-91e1-4d8f44de6066-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.624514 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6lz6\" (UniqueName: \"kubernetes.io/projected/244815ff-89c6-49ac-91e1-4d8f44de6066-kube-api-access-z6lz6\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.882283 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fcp6h" event={"ID":"244815ff-89c6-49ac-91e1-4d8f44de6066","Type":"ContainerDied","Data":"0d25bcb42e6f27051a89d31c544da700a7cc3453f311a25ece7eec6ac94cf26c"} Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.882325 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d25bcb42e6f27051a89d31c544da700a7cc3453f311a25ece7eec6ac94cf26c" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.882488 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fcp6h" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.892285 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e97b-account-create-update-7kkdr" event={"ID":"cd1984c3-c561-48d8-8e99-a596088b25b7","Type":"ContainerDied","Data":"6515d8e233a2fa628ba23c75b309f7db000a664e3978a0df9d408d36f4c87c75"} Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.892338 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6515d8e233a2fa628ba23c75b309f7db000a664e3978a0df9d408d36f4c87c75" Jan 30 23:08:09 crc kubenswrapper[4979]: I0130 23:08:09.892409 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e97b-account-create-update-7kkdr" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.201278 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-9lbrp"] Jan 30 23:08:11 crc kubenswrapper[4979]: E0130 23:08:11.201811 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd1984c3-c561-48d8-8e99-a596088b25b7" containerName="mariadb-account-create-update" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.201830 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd1984c3-c561-48d8-8e99-a596088b25b7" containerName="mariadb-account-create-update" Jan 30 23:08:11 crc kubenswrapper[4979]: E0130 23:08:11.201851 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244815ff-89c6-49ac-91e1-4d8f44de6066" containerName="mariadb-database-create" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.201859 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="244815ff-89c6-49ac-91e1-4d8f44de6066" containerName="mariadb-database-create" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.202093 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd1984c3-c561-48d8-8e99-a596088b25b7" containerName="mariadb-account-create-update" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.202111 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="244815ff-89c6-49ac-91e1-4d8f44de6066" containerName="mariadb-database-create" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.202898 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9lbrp" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.210180 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mt2t7" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.210613 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.211017 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.212084 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.228528 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9lbrp"] Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.361434 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sln8l\" (UniqueName: \"kubernetes.io/projected/7e90fa06-119c-454e-9f4e-da0b5bff99bb-kube-api-access-sln8l\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.361566 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-combined-ca-bundle\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.361596 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-config-data\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.465552 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-combined-ca-bundle\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.465769 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-config-data\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.465919 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sln8l\" (UniqueName: \"kubernetes.io/projected/7e90fa06-119c-454e-9f4e-da0b5bff99bb-kube-api-access-sln8l\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.470364 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-combined-ca-bundle\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " 
pod="openstack/keystone-db-sync-9lbrp" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.472795 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-config-data\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.485541 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sln8l\" (UniqueName: \"kubernetes.io/projected/7e90fa06-119c-454e-9f4e-da0b5bff99bb-kube-api-access-sln8l\") pod \"keystone-db-sync-9lbrp\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") " pod="openstack/keystone-db-sync-9lbrp" Jan 30 23:08:11 crc kubenswrapper[4979]: I0130 23:08:11.568123 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9lbrp" Jan 30 23:08:12 crc kubenswrapper[4979]: I0130 23:08:12.059112 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9lbrp"] Jan 30 23:08:12 crc kubenswrapper[4979]: W0130 23:08:12.063454 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e90fa06_119c_454e_9f4e_da0b5bff99bb.slice/crio-4a0bad05efe923895338799bc74e33a9450bf4cd5ed9c6a00d30f2b81a31b2c7 WatchSource:0}: Error finding container 4a0bad05efe923895338799bc74e33a9450bf4cd5ed9c6a00d30f2b81a31b2c7: Status 404 returned error can't find the container with id 4a0bad05efe923895338799bc74e33a9450bf4cd5ed9c6a00d30f2b81a31b2c7 Jan 30 23:08:12 crc kubenswrapper[4979]: I0130 23:08:12.918278 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lbrp" event={"ID":"7e90fa06-119c-454e-9f4e-da0b5bff99bb","Type":"ContainerStarted","Data":"14e6d9a35e66da497f5366e01530325f2e7b1996be432a046623a1284c656b4d"} Jan 30 23:08:12 crc kubenswrapper[4979]: I0130 23:08:12.918798 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lbrp" event={"ID":"7e90fa06-119c-454e-9f4e-da0b5bff99bb","Type":"ContainerStarted","Data":"4a0bad05efe923895338799bc74e33a9450bf4cd5ed9c6a00d30f2b81a31b2c7"} Jan 30 23:08:12 crc kubenswrapper[4979]: I0130 23:08:12.951075 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-9lbrp" podStartSLOduration=1.9510125120000001 podStartE2EDuration="1.951012512s" podCreationTimestamp="2026-01-30 23:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:12.944312041 +0000 UTC m=+5288.905559074" watchObservedRunningTime="2026-01-30 23:08:12.951012512 +0000 UTC m=+5288.912259585" Jan 30 23:08:13 crc kubenswrapper[4979]: I0130 23:08:13.070404 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:08:13 crc kubenswrapper[4979]: E0130 23:08:13.070777 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:08:13 crc 
Jan 30 23:08:13 crc kubenswrapper[4979]: I0130 23:08:13.932865 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lbrp" event={"ID":"7e90fa06-119c-454e-9f4e-da0b5bff99bb","Type":"ContainerDied","Data":"14e6d9a35e66da497f5366e01530325f2e7b1996be432a046623a1284c656b4d"}
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.323975 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9lbrp"
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.438702 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-combined-ca-bundle\") pod \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") "
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.439090 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sln8l\" (UniqueName: \"kubernetes.io/projected/7e90fa06-119c-454e-9f4e-da0b5bff99bb-kube-api-access-sln8l\") pod \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") "
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.439281 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-config-data\") pod \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\" (UID: \"7e90fa06-119c-454e-9f4e-da0b5bff99bb\") "
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.447357 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e90fa06-119c-454e-9f4e-da0b5bff99bb-kube-api-access-sln8l" (OuterVolumeSpecName: "kube-api-access-sln8l") pod "7e90fa06-119c-454e-9f4e-da0b5bff99bb" (UID: "7e90fa06-119c-454e-9f4e-da0b5bff99bb"). InnerVolumeSpecName "kube-api-access-sln8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.462473 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e90fa06-119c-454e-9f4e-da0b5bff99bb" (UID: "7e90fa06-119c-454e-9f4e-da0b5bff99bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.491352 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-config-data" (OuterVolumeSpecName: "config-data") pod "7e90fa06-119c-454e-9f4e-da0b5bff99bb" (UID: "7e90fa06-119c-454e-9f4e-da0b5bff99bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.540968 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sln8l\" (UniqueName: \"kubernetes.io/projected/7e90fa06-119c-454e-9f4e-da0b5bff99bb-kube-api-access-sln8l\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.541007 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.541019 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90fa06-119c-454e-9f4e-da0b5bff99bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.959291 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9lbrp" event={"ID":"7e90fa06-119c-454e-9f4e-da0b5bff99bb","Type":"ContainerDied","Data":"4a0bad05efe923895338799bc74e33a9450bf4cd5ed9c6a00d30f2b81a31b2c7"} Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.959351 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a0bad05efe923895338799bc74e33a9450bf4cd5ed9c6a00d30f2b81a31b2c7" Jan 30 23:08:15 crc kubenswrapper[4979]: I0130 23:08:15.959373 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9lbrp" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.234256 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7457648489-f9xxs"] Jan 30 23:08:16 crc kubenswrapper[4979]: E0130 23:08:16.234814 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e90fa06-119c-454e-9f4e-da0b5bff99bb" containerName="keystone-db-sync" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.234849 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e90fa06-119c-454e-9f4e-da0b5bff99bb" containerName="keystone-db-sync" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.235165 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e90fa06-119c-454e-9f4e-da0b5bff99bb" containerName="keystone-db-sync" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.243357 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.247095 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7457648489-f9xxs"] Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.267392 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mdlw5"] Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.268503 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.271133 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mt2t7" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.271462 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.271590 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.271905 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.272014 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.297826 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mdlw5"] Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355302 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-dns-svc\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355387 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-config\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355418 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-config-data\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355441 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-credential-keys\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355462 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-fernet-keys\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355484 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-scripts\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355522 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lnd4d\" (UniqueName: \"kubernetes.io/projected/fa9355be-183f-4e09-9ffd-50d0be690e6c-kube-api-access-lnd4d\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355549 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355574 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kckg6\" (UniqueName: \"kubernetes.io/projected/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-kube-api-access-kckg6\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355613 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.355637 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-combined-ca-bundle\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.486977 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-config\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487497 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-config-data\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487531 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-credential-keys\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487553 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-fernet-keys\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487578 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-scripts\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487621 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnd4d\" (UniqueName: \"kubernetes.io/projected/fa9355be-183f-4e09-9ffd-50d0be690e6c-kube-api-access-lnd4d\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487658 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487686 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kckg6\" (UniqueName: \"kubernetes.io/projected/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-kube-api-access-kckg6\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487735 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487761 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-combined-ca-bundle\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487842 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-dns-svc\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.487980 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-config\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.488702 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-dns-svc\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.488880 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-sb\") pod 
\"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.488904 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.493087 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-combined-ca-bundle\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.493143 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-fernet-keys\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.494867 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-credential-keys\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.495749 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-config-data\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.500059 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-scripts\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.509848 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kckg6\" (UniqueName: \"kubernetes.io/projected/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-kube-api-access-kckg6\") pod \"dnsmasq-dns-7457648489-f9xxs\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.523737 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnd4d\" (UniqueName: \"kubernetes.io/projected/fa9355be-183f-4e09-9ffd-50d0be690e6c-kube-api-access-lnd4d\") pod \"keystone-bootstrap-mdlw5\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.562782 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:16 crc kubenswrapper[4979]: I0130 23:08:16.595652 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.055485 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7457648489-f9xxs"] Jan 30 23:08:17 crc kubenswrapper[4979]: W0130 23:08:17.060907 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36900afe_d3cd_4b93_8d7c_3d8d3a38f4f7.slice/crio-cfb724e5a3cfea8fe7b3b514eba9b716012b887e4da9bc4289da05ab447f45c5 WatchSource:0}: Error finding container cfb724e5a3cfea8fe7b3b514eba9b716012b887e4da9bc4289da05ab447f45c5: Status 404 returned error can't find the container with id cfb724e5a3cfea8fe7b3b514eba9b716012b887e4da9bc4289da05ab447f45c5 Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.118416 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mdlw5"] Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.992217 4979 generic.go:334] "Generic (PLEG): container finished" podID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerID="681ef7059193b0717b0eb969706fa681ca26f969cea9f506cb0573eaef292ba8" exitCode=0 Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.992380 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457648489-f9xxs" event={"ID":"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7","Type":"ContainerDied","Data":"681ef7059193b0717b0eb969706fa681ca26f969cea9f506cb0573eaef292ba8"} Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.992678 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457648489-f9xxs" event={"ID":"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7","Type":"ContainerStarted","Data":"cfb724e5a3cfea8fe7b3b514eba9b716012b887e4da9bc4289da05ab447f45c5"} Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.996009 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mdlw5" event={"ID":"fa9355be-183f-4e09-9ffd-50d0be690e6c","Type":"ContainerStarted","Data":"b426269bcda15bff5775ef4940ae8834e27498d1a643891649e2cb2da0fea350"} Jan 30 23:08:17 crc kubenswrapper[4979]: I0130 23:08:17.996218 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mdlw5" event={"ID":"fa9355be-183f-4e09-9ffd-50d0be690e6c","Type":"ContainerStarted","Data":"1c326a0b6026f7ae1eb8390c0b32e6b2fadd8e2f8a86f71035bee50a5ca7340a"} Jan 30 23:08:18 crc kubenswrapper[4979]: I0130 23:08:18.066169 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mdlw5" podStartSLOduration=2.066144563 podStartE2EDuration="2.066144563s" podCreationTimestamp="2026-01-30 23:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:18.058821816 +0000 UTC m=+5294.020068869" watchObservedRunningTime="2026-01-30 23:08:18.066144563 +0000 UTC m=+5294.027391606" Jan 30 23:08:19 crc kubenswrapper[4979]: I0130 23:08:19.005637 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457648489-f9xxs" event={"ID":"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7","Type":"ContainerStarted","Data":"99e085a00a239b14d311fb678f2e6ff2ee78f2fddb6b6103e4849b2212235ee3"} Jan 30 23:08:19 crc kubenswrapper[4979]: I0130 23:08:19.036574 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7457648489-f9xxs" podStartSLOduration=3.036539798 
podStartE2EDuration="3.036539798s" podCreationTimestamp="2026-01-30 23:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:19.026262912 +0000 UTC m=+5294.987509965" watchObservedRunningTime="2026-01-30 23:08:19.036539798 +0000 UTC m=+5294.997786841" Jan 30 23:08:20 crc kubenswrapper[4979]: I0130 23:08:20.013289 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:21 crc kubenswrapper[4979]: I0130 23:08:21.030180 4979 generic.go:334] "Generic (PLEG): container finished" podID="fa9355be-183f-4e09-9ffd-50d0be690e6c" containerID="b426269bcda15bff5775ef4940ae8834e27498d1a643891649e2cb2da0fea350" exitCode=0 Jan 30 23:08:21 crc kubenswrapper[4979]: I0130 23:08:21.031364 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mdlw5" event={"ID":"fa9355be-183f-4e09-9ffd-50d0be690e6c","Type":"ContainerDied","Data":"b426269bcda15bff5775ef4940ae8834e27498d1a643891649e2cb2da0fea350"} Jan 30 23:08:21 crc kubenswrapper[4979]: I0130 23:08:21.082674 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.393132 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.496849 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-scripts\") pod \"fa9355be-183f-4e09-9ffd-50d0be690e6c\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.496970 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-credential-keys\") pod \"fa9355be-183f-4e09-9ffd-50d0be690e6c\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.497110 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-config-data\") pod \"fa9355be-183f-4e09-9ffd-50d0be690e6c\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.497192 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-fernet-keys\") pod \"fa9355be-183f-4e09-9ffd-50d0be690e6c\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.497231 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-combined-ca-bundle\") pod \"fa9355be-183f-4e09-9ffd-50d0be690e6c\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") " Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.497338 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnd4d\" (UniqueName: \"kubernetes.io/projected/fa9355be-183f-4e09-9ffd-50d0be690e6c-kube-api-access-lnd4d\") pod \"fa9355be-183f-4e09-9ffd-50d0be690e6c\" (UID: \"fa9355be-183f-4e09-9ffd-50d0be690e6c\") 
" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.505570 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fa9355be-183f-4e09-9ffd-50d0be690e6c" (UID: "fa9355be-183f-4e09-9ffd-50d0be690e6c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.506137 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-scripts" (OuterVolumeSpecName: "scripts") pod "fa9355be-183f-4e09-9ffd-50d0be690e6c" (UID: "fa9355be-183f-4e09-9ffd-50d0be690e6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.506384 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fa9355be-183f-4e09-9ffd-50d0be690e6c" (UID: "fa9355be-183f-4e09-9ffd-50d0be690e6c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.509314 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9355be-183f-4e09-9ffd-50d0be690e6c-kube-api-access-lnd4d" (OuterVolumeSpecName: "kube-api-access-lnd4d") pod "fa9355be-183f-4e09-9ffd-50d0be690e6c" (UID: "fa9355be-183f-4e09-9ffd-50d0be690e6c"). InnerVolumeSpecName "kube-api-access-lnd4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.528235 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa9355be-183f-4e09-9ffd-50d0be690e6c" (UID: "fa9355be-183f-4e09-9ffd-50d0be690e6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.530920 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-config-data" (OuterVolumeSpecName: "config-data") pod "fa9355be-183f-4e09-9ffd-50d0be690e6c" (UID: "fa9355be-183f-4e09-9ffd-50d0be690e6c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.599531 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.599565 4979 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.599575 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.599586 4979 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.599596 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa9355be-183f-4e09-9ffd-50d0be690e6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:22 crc kubenswrapper[4979]: I0130 23:08:22.599607 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnd4d\" (UniqueName: \"kubernetes.io/projected/fa9355be-183f-4e09-9ffd-50d0be690e6c-kube-api-access-lnd4d\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.066731 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mdlw5" event={"ID":"fa9355be-183f-4e09-9ffd-50d0be690e6c","Type":"ContainerDied","Data":"1c326a0b6026f7ae1eb8390c0b32e6b2fadd8e2f8a86f71035bee50a5ca7340a"} Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.066799 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c326a0b6026f7ae1eb8390c0b32e6b2fadd8e2f8a86f71035bee50a5ca7340a" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.066948 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mdlw5" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.146338 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mdlw5"] Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.152361 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mdlw5"] Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.237059 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-n2mf2"] Jan 30 23:08:23 crc kubenswrapper[4979]: E0130 23:08:23.237461 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9355be-183f-4e09-9ffd-50d0be690e6c" containerName="keystone-bootstrap" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.237482 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9355be-183f-4e09-9ffd-50d0be690e6c" containerName="keystone-bootstrap" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.237692 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9355be-183f-4e09-9ffd-50d0be690e6c" containerName="keystone-bootstrap" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.238406 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.241240 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.241312 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.241573 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.242605 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.246156 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mt2t7" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.251374 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n2mf2"] Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.413481 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-credential-keys\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.413592 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-scripts\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.413768 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-combined-ca-bundle\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.413878 
4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-fernet-keys\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.413928 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tptjk\" (UniqueName: \"kubernetes.io/projected/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-kube-api-access-tptjk\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.414077 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-config-data\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.515850 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-combined-ca-bundle\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.515917 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-fernet-keys\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.515942 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tptjk\" (UniqueName: \"kubernetes.io/projected/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-kube-api-access-tptjk\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.515967 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-config-data\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.515995 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-credential-keys\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.516094 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-scripts\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.523282 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-combined-ca-bundle\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.526858 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-credential-keys\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.527084 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-config-data\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.527024 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-scripts\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.527391 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-fernet-keys\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.537378 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tptjk\" (UniqueName: \"kubernetes.io/projected/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-kube-api-access-tptjk\") pod \"keystone-bootstrap-n2mf2\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.567706 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:23 crc kubenswrapper[4979]: I0130 23:08:23.808453 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n2mf2"] Jan 30 23:08:23 crc kubenswrapper[4979]: W0130 23:08:23.812669 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d5aa2c0_69c0_486f_8bf7_0f7539935f2e.slice/crio-567d0ed8d668d5308226d1748d3a631cbcabf3037632f3519259e37a3993ed4c WatchSource:0}: Error finding container 567d0ed8d668d5308226d1748d3a631cbcabf3037632f3519259e37a3993ed4c: Status 404 returned error can't find the container with id 567d0ed8d668d5308226d1748d3a631cbcabf3037632f3519259e37a3993ed4c Jan 30 23:08:24 crc kubenswrapper[4979]: I0130 23:08:24.076168 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n2mf2" event={"ID":"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e","Type":"ContainerStarted","Data":"146a28aa66c76f36d0bb7b5d10b9ff7158b1cd544c809454096339f1b214adf4"} Jan 30 23:08:24 crc kubenswrapper[4979]: I0130 23:08:24.076217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n2mf2" event={"ID":"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e","Type":"ContainerStarted","Data":"567d0ed8d668d5308226d1748d3a631cbcabf3037632f3519259e37a3993ed4c"} Jan 30 23:08:24 crc kubenswrapper[4979]: I0130 23:08:24.095057 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-n2mf2" podStartSLOduration=1.095019185 podStartE2EDuration="1.095019185s" podCreationTimestamp="2026-01-30 23:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:24.091140991 +0000 UTC m=+5300.052388024" watchObservedRunningTime="2026-01-30 23:08:24.095019185 +0000 UTC m=+5300.056266218" Jan 30 23:08:25 crc kubenswrapper[4979]: I0130 23:08:25.086916 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa9355be-183f-4e09-9ffd-50d0be690e6c" path="/var/lib/kubelet/pods/fa9355be-183f-4e09-9ffd-50d0be690e6c/volumes" Jan 30 23:08:26 crc kubenswrapper[4979]: I0130 23:08:26.564414 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:08:26 crc kubenswrapper[4979]: I0130 23:08:26.641844 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476b55b47-6g9tr"] Jan 30 23:08:26 crc kubenswrapper[4979]: I0130 23:08:26.642207 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerName="dnsmasq-dns" containerID="cri-o://00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a" gracePeriod=10 Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.070688 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:08:27 crc kubenswrapper[4979]: E0130 23:08:27.071611 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.087218 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.115373 4979 generic.go:334] "Generic (PLEG): container finished" podID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerID="00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a" exitCode=0 Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.115441 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" event={"ID":"4dbc7280-e667-4d13-b0a0-eb654db2900a","Type":"ContainerDied","Data":"00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a"} Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.115488 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" event={"ID":"4dbc7280-e667-4d13-b0a0-eb654db2900a","Type":"ContainerDied","Data":"d2099c11bd8a88809e6380887ce5ad437e53d8e142c196904be1e3882261f67b"} Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.115493 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476b55b47-6g9tr" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.115507 4979 scope.go:117] "RemoveContainer" containerID="00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.121496 4979 generic.go:334] "Generic (PLEG): container finished" podID="2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" containerID="146a28aa66c76f36d0bb7b5d10b9ff7158b1cd544c809454096339f1b214adf4" exitCode=0 Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.121541 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n2mf2" event={"ID":"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e","Type":"ContainerDied","Data":"146a28aa66c76f36d0bb7b5d10b9ff7158b1cd544c809454096339f1b214adf4"} Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.143114 4979 scope.go:117] "RemoveContainer" containerID="eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.166329 4979 scope.go:117] "RemoveContainer" containerID="00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a" Jan 30 23:08:27 crc kubenswrapper[4979]: E0130 23:08:27.166934 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a\": container with ID starting with 00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a not found: ID does not exist" containerID="00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.166999 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a"} err="failed to get container status \"00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a\": rpc error: code = NotFound desc = could not find container \"00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a\": container with ID starting with 00a90ab1adf439db7b2e2df1839d7ce130437aa6677ce1e824d0cbeabec15d5a not found: ID does not exist" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.167124 4979 scope.go:117] 
"RemoveContainer" containerID="eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4" Jan 30 23:08:27 crc kubenswrapper[4979]: E0130 23:08:27.167892 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4\": container with ID starting with eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4 not found: ID does not exist" containerID="eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.167931 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4"} err="failed to get container status \"eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4\": rpc error: code = NotFound desc = could not find container \"eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4\": container with ID starting with eeab8d2a60db7894f22ace75d91a333ece8f0cce76f5580aa44f3a098920e8b4 not found: ID does not exist" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.182556 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-dns-svc\") pod \"4dbc7280-e667-4d13-b0a0-eb654db2900a\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.182695 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-sb\") pod \"4dbc7280-e667-4d13-b0a0-eb654db2900a\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.182738 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-nb\") pod \"4dbc7280-e667-4d13-b0a0-eb654db2900a\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.182762 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-config\") pod \"4dbc7280-e667-4d13-b0a0-eb654db2900a\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.182801 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7zpd\" (UniqueName: \"kubernetes.io/projected/4dbc7280-e667-4d13-b0a0-eb654db2900a-kube-api-access-b7zpd\") pod \"4dbc7280-e667-4d13-b0a0-eb654db2900a\" (UID: \"4dbc7280-e667-4d13-b0a0-eb654db2900a\") " Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.189874 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbc7280-e667-4d13-b0a0-eb654db2900a-kube-api-access-b7zpd" (OuterVolumeSpecName: "kube-api-access-b7zpd") pod "4dbc7280-e667-4d13-b0a0-eb654db2900a" (UID: "4dbc7280-e667-4d13-b0a0-eb654db2900a"). InnerVolumeSpecName "kube-api-access-b7zpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.223124 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4dbc7280-e667-4d13-b0a0-eb654db2900a" (UID: "4dbc7280-e667-4d13-b0a0-eb654db2900a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.223773 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-config" (OuterVolumeSpecName: "config") pod "4dbc7280-e667-4d13-b0a0-eb654db2900a" (UID: "4dbc7280-e667-4d13-b0a0-eb654db2900a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.227458 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4dbc7280-e667-4d13-b0a0-eb654db2900a" (UID: "4dbc7280-e667-4d13-b0a0-eb654db2900a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.231567 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4dbc7280-e667-4d13-b0a0-eb654db2900a" (UID: "4dbc7280-e667-4d13-b0a0-eb654db2900a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.284842 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.284880 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.284892 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.284906 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7zpd\" (UniqueName: \"kubernetes.io/projected/4dbc7280-e667-4d13-b0a0-eb654db2900a-kube-api-access-b7zpd\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.284916 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dbc7280-e667-4d13-b0a0-eb654db2900a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.461191 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476b55b47-6g9tr"] Jan 30 23:08:27 crc kubenswrapper[4979]: I0130 23:08:27.467533 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8476b55b47-6g9tr"] Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.442756 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.612305 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tptjk\" (UniqueName: \"kubernetes.io/projected/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-kube-api-access-tptjk\") pod \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.612364 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-credential-keys\") pod \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.612499 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-config-data\") pod \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.612531 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-scripts\") pod \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.612551 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-fernet-keys\") pod \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.612572 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-combined-ca-bundle\") pod \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\" (UID: \"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e\") " Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.619174 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" (UID: "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.619733 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-kube-api-access-tptjk" (OuterVolumeSpecName: "kube-api-access-tptjk") pod "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" (UID: "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e"). InnerVolumeSpecName "kube-api-access-tptjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.620571 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" (UID: "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.620882 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-scripts" (OuterVolumeSpecName: "scripts") pod "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" (UID: "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.637382 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" (UID: "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.638742 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-config-data" (OuterVolumeSpecName: "config-data") pod "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" (UID: "2d5aa2c0-69c0-486f-8bf7-0f7539935f2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.714239 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tptjk\" (UniqueName: \"kubernetes.io/projected/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-kube-api-access-tptjk\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.714270 4979 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.714279 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.714289 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.714297 4979 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:28 crc kubenswrapper[4979]: I0130 23:08:28.714306 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.082492 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" path="/var/lib/kubelet/pods/4dbc7280-e667-4d13-b0a0-eb654db2900a/volumes" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.147230 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n2mf2" event={"ID":"2d5aa2c0-69c0-486f-8bf7-0f7539935f2e","Type":"ContainerDied","Data":"567d0ed8d668d5308226d1748d3a631cbcabf3037632f3519259e37a3993ed4c"} Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.147275 
4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="567d0ed8d668d5308226d1748d3a631cbcabf3037632f3519259e37a3993ed4c" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.147340 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n2mf2" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.230003 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5b988cf8cf-m4gbb"] Jan 30 23:08:29 crc kubenswrapper[4979]: E0130 23:08:29.230405 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerName="init" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.230426 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerName="init" Jan 30 23:08:29 crc kubenswrapper[4979]: E0130 23:08:29.230439 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerName="dnsmasq-dns" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.230446 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerName="dnsmasq-dns" Jan 30 23:08:29 crc kubenswrapper[4979]: E0130 23:08:29.230464 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" containerName="keystone-bootstrap" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.230472 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" containerName="keystone-bootstrap" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.230654 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" containerName="keystone-bootstrap" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.230680 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbc7280-e667-4d13-b0a0-eb654db2900a" containerName="dnsmasq-dns" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.231311 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.234338 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.236816 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.238178 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.249241 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mt2t7" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.255427 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b988cf8cf-m4gbb"] Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.322383 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-fernet-keys\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.322440 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-scripts\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.322627 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-credential-keys\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.322678 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-combined-ca-bundle\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.322722 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkdjf\" (UniqueName: \"kubernetes.io/projected/564a9679-372a-47bb-be3d-70b37a775724-kube-api-access-zkdjf\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.322751 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-config-data\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.425151 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-credential-keys\") pod 
\"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.425233 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-combined-ca-bundle\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.425287 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkdjf\" (UniqueName: \"kubernetes.io/projected/564a9679-372a-47bb-be3d-70b37a775724-kube-api-access-zkdjf\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.425325 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-config-data\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.425544 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-fernet-keys\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.425582 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-scripts\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.431970 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-fernet-keys\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.432653 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-scripts\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.435906 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-credential-keys\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.440916 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-combined-ca-bundle\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 
23:08:29.444693 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564a9679-372a-47bb-be3d-70b37a775724-config-data\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.446369 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkdjf\" (UniqueName: \"kubernetes.io/projected/564a9679-372a-47bb-be3d-70b37a775724-kube-api-access-zkdjf\") pod \"keystone-5b988cf8cf-m4gbb\" (UID: \"564a9679-372a-47bb-be3d-70b37a775724\") " pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:29 crc kubenswrapper[4979]: I0130 23:08:29.562478 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:30 crc kubenswrapper[4979]: I0130 23:08:30.059209 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b988cf8cf-m4gbb"] Jan 30 23:08:30 crc kubenswrapper[4979]: I0130 23:08:30.160991 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b988cf8cf-m4gbb" event={"ID":"564a9679-372a-47bb-be3d-70b37a775724","Type":"ContainerStarted","Data":"91c8eaf2402a26626f1c2f111bfae28e4f1d7961f5e02f60f0dd36c3bd52cbb9"} Jan 30 23:08:31 crc kubenswrapper[4979]: I0130 23:08:31.170699 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b988cf8cf-m4gbb" event={"ID":"564a9679-372a-47bb-be3d-70b37a775724","Type":"ContainerStarted","Data":"82ee1bef91f2dd8cb7a728d8f6ae1c5fa842daac7ddcc8e00624e33af28702c2"} Jan 30 23:08:31 crc kubenswrapper[4979]: I0130 23:08:31.171226 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:08:39 crc kubenswrapper[4979]: I0130 23:08:39.070202 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:08:40 crc kubenswrapper[4979]: I0130 23:08:40.252772 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"94f5c7990b2576813cfa39ef85f902f7a75770e6c04a43bd1848309b7c39ad19"} Jan 30 23:08:40 crc kubenswrapper[4979]: I0130 23:08:40.275231 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5b988cf8cf-m4gbb" podStartSLOduration=11.275216486 podStartE2EDuration="11.275216486s" podCreationTimestamp="2026-01-30 23:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:08:31.195071433 +0000 UTC m=+5307.156318486" watchObservedRunningTime="2026-01-30 23:08:40.275216486 +0000 UTC m=+5316.236463519" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.381105 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9wxqb"] Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.384228 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.420634 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wxqb"] Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.543051 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqmlv\" (UniqueName: \"kubernetes.io/projected/79df5709-b60b-4860-bd40-f6a7192e3ddd-kube-api-access-nqmlv\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.543128 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-utilities\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.543389 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-catalog-content\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.644682 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqmlv\" (UniqueName: \"kubernetes.io/projected/79df5709-b60b-4860-bd40-f6a7192e3ddd-kube-api-access-nqmlv\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.644775 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-utilities\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.644836 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-catalog-content\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.645673 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-catalog-content\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.646548 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-utilities\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.670065 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nqmlv\" (UniqueName: \"kubernetes.io/projected/79df5709-b60b-4860-bd40-f6a7192e3ddd-kube-api-access-nqmlv\") pod \"certified-operators-9wxqb\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:08:59 crc kubenswrapper[4979]: I0130 23:08:59.705384 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:09:00 crc kubenswrapper[4979]: I0130 23:09:00.212778 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wxqb"] Jan 30 23:09:00 crc kubenswrapper[4979]: I0130 23:09:00.447615 4979 generic.go:334] "Generic (PLEG): container finished" podID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerID="0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51" exitCode=0 Jan 30 23:09:00 crc kubenswrapper[4979]: I0130 23:09:00.447845 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wxqb" event={"ID":"79df5709-b60b-4860-bd40-f6a7192e3ddd","Type":"ContainerDied","Data":"0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51"} Jan 30 23:09:00 crc kubenswrapper[4979]: I0130 23:09:00.447999 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wxqb" event={"ID":"79df5709-b60b-4860-bd40-f6a7192e3ddd","Type":"ContainerStarted","Data":"03bb9ff3f2f98999776322287e8c0747d6142a3c313efb59c7290e9380d4d5a5"} Jan 30 23:09:01 crc kubenswrapper[4979]: I0130 23:09:01.166340 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5b988cf8cf-m4gbb" Jan 30 23:09:02 crc kubenswrapper[4979]: I0130 23:09:02.467314 4979 generic.go:334] "Generic (PLEG): container finished" podID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerID="a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0" exitCode=0 Jan 30 23:09:02 crc kubenswrapper[4979]: I0130 23:09:02.467426 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wxqb" event={"ID":"79df5709-b60b-4860-bd40-f6a7192e3ddd","Type":"ContainerDied","Data":"a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0"} Jan 30 23:09:03 crc kubenswrapper[4979]: I0130 23:09:03.476477 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wxqb" event={"ID":"79df5709-b60b-4860-bd40-f6a7192e3ddd","Type":"ContainerStarted","Data":"9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea"} Jan 30 23:09:03 crc kubenswrapper[4979]: I0130 23:09:03.498826 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9wxqb" podStartSLOduration=2.054148251 podStartE2EDuration="4.498804731s" podCreationTimestamp="2026-01-30 23:08:59 +0000 UTC" firstStartedPulling="2026-01-30 23:09:00.44940348 +0000 UTC m=+5336.410650513" lastFinishedPulling="2026-01-30 23:09:02.89405995 +0000 UTC m=+5338.855306993" observedRunningTime="2026-01-30 23:09:03.497161337 +0000 UTC m=+5339.458408390" watchObservedRunningTime="2026-01-30 23:09:03.498804731 +0000 UTC m=+5339.460051774" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.644333 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.646318 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.651728 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.652150 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.652175 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-nx46z" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.662480 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.761989 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config-secret\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.762164 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k49pn\" (UniqueName: \"kubernetes.io/projected/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-kube-api-access-k49pn\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.762284 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.863541 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.863714 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config-secret\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.863767 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k49pn\" (UniqueName: \"kubernetes.io/projected/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-kube-api-access-k49pn\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.865909 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.870416 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config-secret\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.885774 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k49pn\" (UniqueName: \"kubernetes.io/projected/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-kube-api-access-k49pn\") pod \"openstackclient\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " pod="openstack/openstackclient" Jan 30 23:09:05 crc kubenswrapper[4979]: I0130 23:09:05.992575 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 23:09:06 crc kubenswrapper[4979]: I0130 23:09:06.455323 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 23:09:06 crc kubenswrapper[4979]: I0130 23:09:06.500702 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e","Type":"ContainerStarted","Data":"9b35f115458eae51c09b989e0ed88002066967bab95f54ae46481d8d55d31f85"} Jan 30 23:09:07 crc kubenswrapper[4979]: I0130 23:09:07.510975 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e","Type":"ContainerStarted","Data":"9e23067542f31893bc50fa1bf6cce7ed4e9c501f08ce728f7f2d98af05d87464"} Jan 30 23:09:07 crc kubenswrapper[4979]: I0130 23:09:07.530154 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.530125488 podStartE2EDuration="2.530125488s" podCreationTimestamp="2026-01-30 23:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:09:07.529174842 +0000 UTC m=+5343.490421915" watchObservedRunningTime="2026-01-30 23:09:07.530125488 +0000 UTC m=+5343.491372551" Jan 30 23:09:09 crc kubenswrapper[4979]: I0130 23:09:09.705993 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:09:09 crc kubenswrapper[4979]: I0130 23:09:09.706503 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:09:09 crc kubenswrapper[4979]: I0130 23:09:09.791152 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:09:10 crc kubenswrapper[4979]: I0130 23:09:10.587677 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:09:10 crc kubenswrapper[4979]: I0130 23:09:10.652650 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wxqb"] Jan 30 23:09:12 crc kubenswrapper[4979]: I0130 23:09:12.556967 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9wxqb" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="registry-server" containerID="cri-o://9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea" gracePeriod=2 Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.105500 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.222911 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqmlv\" (UniqueName: \"kubernetes.io/projected/79df5709-b60b-4860-bd40-f6a7192e3ddd-kube-api-access-nqmlv\") pod \"79df5709-b60b-4860-bd40-f6a7192e3ddd\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.223118 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-utilities\") pod \"79df5709-b60b-4860-bd40-f6a7192e3ddd\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.223228 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-catalog-content\") pod \"79df5709-b60b-4860-bd40-f6a7192e3ddd\" (UID: \"79df5709-b60b-4860-bd40-f6a7192e3ddd\") " Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.224706 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-utilities" (OuterVolumeSpecName: "utilities") pod "79df5709-b60b-4860-bd40-f6a7192e3ddd" (UID: "79df5709-b60b-4860-bd40-f6a7192e3ddd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.230210 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79df5709-b60b-4860-bd40-f6a7192e3ddd-kube-api-access-nqmlv" (OuterVolumeSpecName: "kube-api-access-nqmlv") pod "79df5709-b60b-4860-bd40-f6a7192e3ddd" (UID: "79df5709-b60b-4860-bd40-f6a7192e3ddd"). InnerVolumeSpecName "kube-api-access-nqmlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.268956 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79df5709-b60b-4860-bd40-f6a7192e3ddd" (UID: "79df5709-b60b-4860-bd40-f6a7192e3ddd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.325988 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.326076 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79df5709-b60b-4860-bd40-f6a7192e3ddd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.326090 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqmlv\" (UniqueName: \"kubernetes.io/projected/79df5709-b60b-4860-bd40-f6a7192e3ddd-kube-api-access-nqmlv\") on node \"crc\" DevicePath \"\"" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.566403 4979 generic.go:334] "Generic (PLEG): container finished" podID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerID="9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea" exitCode=0 Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.566454 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wxqb" event={"ID":"79df5709-b60b-4860-bd40-f6a7192e3ddd","Type":"ContainerDied","Data":"9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea"} Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.566467 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wxqb" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.566484 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wxqb" event={"ID":"79df5709-b60b-4860-bd40-f6a7192e3ddd","Type":"ContainerDied","Data":"03bb9ff3f2f98999776322287e8c0747d6142a3c313efb59c7290e9380d4d5a5"} Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.566504 4979 scope.go:117] "RemoveContainer" containerID="9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.597084 4979 scope.go:117] "RemoveContainer" containerID="a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.606742 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wxqb"] Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.613286 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9wxqb"] Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.628836 4979 scope.go:117] "RemoveContainer" containerID="0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.658754 4979 scope.go:117] "RemoveContainer" containerID="9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea" Jan 30 23:09:13 crc kubenswrapper[4979]: E0130 23:09:13.659841 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea\": container with ID starting with 9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea not found: ID does not exist" containerID="9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.659892 
4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea"} err="failed to get container status \"9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea\": rpc error: code = NotFound desc = could not find container \"9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea\": container with ID starting with 9bca9fffd5b2ecc1eef992f86d5e3793b1c9e98f30317944ce90a4fde8c64dea not found: ID does not exist" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.659927 4979 scope.go:117] "RemoveContainer" containerID="a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0" Jan 30 23:09:13 crc kubenswrapper[4979]: E0130 23:09:13.660728 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0\": container with ID starting with a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0 not found: ID does not exist" containerID="a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.660760 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0"} err="failed to get container status \"a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0\": rpc error: code = NotFound desc = could not find container \"a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0\": container with ID starting with a8c733b1c37684534a8838129b5bde0eead10e4fa332b4f608e413a7ef11b6e0 not found: ID does not exist" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.660779 4979 scope.go:117] "RemoveContainer" containerID="0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51" Jan 30 23:09:13 crc kubenswrapper[4979]: E0130 23:09:13.661380 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51\": container with ID starting with 0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51 not found: ID does not exist" containerID="0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51" Jan 30 23:09:13 crc kubenswrapper[4979]: I0130 23:09:13.661444 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51"} err="failed to get container status \"0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51\": rpc error: code = NotFound desc = could not find container \"0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51\": container with ID starting with 0cb3a0c20ab402c50d212c2fed70f168ec1f810b4703857de15577a9814e3d51 not found: ID does not exist" Jan 30 23:09:15 crc kubenswrapper[4979]: I0130 23:09:15.087992 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" path="/var/lib/kubelet/pods/79df5709-b60b-4860-bd40-f6a7192e3ddd/volumes" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.063477 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fm7qh"] Jan 30 23:09:25 crc kubenswrapper[4979]: E0130 23:09:25.073484 4979 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="extract-utilities" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.073534 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="extract-utilities" Jan 30 23:09:25 crc kubenswrapper[4979]: E0130 23:09:25.073575 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="registry-server" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.073590 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="registry-server" Jan 30 23:09:25 crc kubenswrapper[4979]: E0130 23:09:25.073607 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="extract-content" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.073627 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="extract-content" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.074553 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="79df5709-b60b-4860-bd40-f6a7192e3ddd" containerName="registry-server" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.079596 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.098847 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fm7qh"] Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.249919 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl62l\" (UniqueName: \"kubernetes.io/projected/13efd321-46d8-41f8-9424-6d43e957fe88-kube-api-access-fl62l\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.250022 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-catalog-content\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.250172 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-utilities\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.351563 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-catalog-content\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.351677 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-utilities\") pod \"redhat-operators-fm7qh\" (UID: 
\"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.352114 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-catalog-content\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.352194 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-utilities\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.352352 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl62l\" (UniqueName: \"kubernetes.io/projected/13efd321-46d8-41f8-9424-6d43e957fe88-kube-api-access-fl62l\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.378335 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl62l\" (UniqueName: \"kubernetes.io/projected/13efd321-46d8-41f8-9424-6d43e957fe88-kube-api-access-fl62l\") pod \"redhat-operators-fm7qh\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.422551 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:25 crc kubenswrapper[4979]: I0130 23:09:25.668737 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fm7qh"] Jan 30 23:09:26 crc kubenswrapper[4979]: I0130 23:09:26.681532 4979 generic.go:334] "Generic (PLEG): container finished" podID="13efd321-46d8-41f8-9424-6d43e957fe88" containerID="773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e" exitCode=0 Jan 30 23:09:26 crc kubenswrapper[4979]: I0130 23:09:26.681580 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerDied","Data":"773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e"} Jan 30 23:09:26 crc kubenswrapper[4979]: I0130 23:09:26.681855 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerStarted","Data":"e461e1ee73828224c73664045ee848b5b639e5074f228fad5cfc8f69411cf7bb"} Jan 30 23:09:27 crc kubenswrapper[4979]: I0130 23:09:27.690836 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerStarted","Data":"4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed"} Jan 30 23:09:28 crc kubenswrapper[4979]: I0130 23:09:28.704487 4979 generic.go:334] "Generic (PLEG): container finished" podID="13efd321-46d8-41f8-9424-6d43e957fe88" containerID="4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed" exitCode=0 Jan 30 23:09:28 crc kubenswrapper[4979]: I0130 23:09:28.704585 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerDied","Data":"4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed"} Jan 30 23:09:29 crc kubenswrapper[4979]: I0130 23:09:29.723443 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerStarted","Data":"ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996"} Jan 30 23:09:29 crc kubenswrapper[4979]: I0130 23:09:29.756466 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fm7qh" podStartSLOduration=3.291667935 podStartE2EDuration="5.756442785s" podCreationTimestamp="2026-01-30 23:09:24 +0000 UTC" firstStartedPulling="2026-01-30 23:09:26.682634497 +0000 UTC m=+5362.643881530" lastFinishedPulling="2026-01-30 23:09:29.147409337 +0000 UTC m=+5365.108656380" observedRunningTime="2026-01-30 23:09:29.750578107 +0000 UTC m=+5365.711825140" watchObservedRunningTime="2026-01-30 23:09:29.756442785 +0000 UTC m=+5365.717689808" Jan 30 23:09:35 crc kubenswrapper[4979]: I0130 23:09:35.423372 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:35 crc kubenswrapper[4979]: I0130 23:09:35.424454 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:36 crc kubenswrapper[4979]: I0130 23:09:36.473425 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fm7qh" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="registry-server" probeResult="failure" output=< Jan 30 23:09:36 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s Jan 30 23:09:36 crc kubenswrapper[4979]: > Jan 30 23:09:45 crc kubenswrapper[4979]: I0130 23:09:45.469649 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:45 crc kubenswrapper[4979]: I0130 23:09:45.540626 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:45 crc kubenswrapper[4979]: I0130 23:09:45.712431 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fm7qh"] Jan 30 23:09:46 crc kubenswrapper[4979]: I0130 23:09:46.877332 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fm7qh" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="registry-server" containerID="cri-o://ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996" gracePeriod=2 Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.353742 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.458522 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-catalog-content\") pod \"13efd321-46d8-41f8-9424-6d43e957fe88\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.458690 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl62l\" (UniqueName: \"kubernetes.io/projected/13efd321-46d8-41f8-9424-6d43e957fe88-kube-api-access-fl62l\") pod \"13efd321-46d8-41f8-9424-6d43e957fe88\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.458748 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-utilities\") pod \"13efd321-46d8-41f8-9424-6d43e957fe88\" (UID: \"13efd321-46d8-41f8-9424-6d43e957fe88\") " Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.465247 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13efd321-46d8-41f8-9424-6d43e957fe88-kube-api-access-fl62l" (OuterVolumeSpecName: "kube-api-access-fl62l") pod "13efd321-46d8-41f8-9424-6d43e957fe88" (UID: "13efd321-46d8-41f8-9424-6d43e957fe88"). InnerVolumeSpecName "kube-api-access-fl62l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.494827 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-utilities" (OuterVolumeSpecName: "utilities") pod "13efd321-46d8-41f8-9424-6d43e957fe88" (UID: "13efd321-46d8-41f8-9424-6d43e957fe88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.561597 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl62l\" (UniqueName: \"kubernetes.io/projected/13efd321-46d8-41f8-9424-6d43e957fe88-kube-api-access-fl62l\") on node \"crc\" DevicePath \"\"" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.561656 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.647232 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13efd321-46d8-41f8-9424-6d43e957fe88" (UID: "13efd321-46d8-41f8-9424-6d43e957fe88"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.662615 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13efd321-46d8-41f8-9424-6d43e957fe88-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.887615 4979 generic.go:334] "Generic (PLEG): container finished" podID="13efd321-46d8-41f8-9424-6d43e957fe88" containerID="ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996" exitCode=0 Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.887690 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerDied","Data":"ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996"} Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.887702 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm7qh" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.887836 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm7qh" event={"ID":"13efd321-46d8-41f8-9424-6d43e957fe88","Type":"ContainerDied","Data":"e461e1ee73828224c73664045ee848b5b639e5074f228fad5cfc8f69411cf7bb"} Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.887871 4979 scope.go:117] "RemoveContainer" containerID="ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.922882 4979 scope.go:117] "RemoveContainer" containerID="4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed" Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.934525 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fm7qh"] Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.938021 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fm7qh"] Jan 30 23:09:47 crc kubenswrapper[4979]: I0130 23:09:47.954272 4979 scope.go:117] "RemoveContainer" containerID="773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e" Jan 30 23:09:48 crc kubenswrapper[4979]: I0130 23:09:48.004971 4979 scope.go:117] "RemoveContainer" containerID="ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996" Jan 30 23:09:48 crc kubenswrapper[4979]: E0130 23:09:48.005339 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996\": container with ID starting with ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996 not found: ID does not exist" containerID="ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996" Jan 30 23:09:48 crc kubenswrapper[4979]: I0130 23:09:48.005402 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996"} err="failed to get container status \"ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996\": rpc error: code = NotFound desc = could not find container \"ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996\": container with ID starting with ca39137328a4f2bc410c2898bb1b95ce8f3963ffdbcb3370a21c825720622996 not found: ID does not exist" Jan 30 23:09:48 crc 
kubenswrapper[4979]: I0130 23:09:48.005429 4979 scope.go:117] "RemoveContainer" containerID="4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed" Jan 30 23:09:48 crc kubenswrapper[4979]: E0130 23:09:48.005760 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed\": container with ID starting with 4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed not found: ID does not exist" containerID="4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed" Jan 30 23:09:48 crc kubenswrapper[4979]: I0130 23:09:48.005826 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed"} err="failed to get container status \"4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed\": rpc error: code = NotFound desc = could not find container \"4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed\": container with ID starting with 4e2559430512be378eb34b0af928ebd2e297217936af1ac7658573f89d8c07ed not found: ID does not exist" Jan 30 23:09:48 crc kubenswrapper[4979]: I0130 23:09:48.005880 4979 scope.go:117] "RemoveContainer" containerID="773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e" Jan 30 23:09:48 crc kubenswrapper[4979]: E0130 23:09:48.006232 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e\": container with ID starting with 773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e not found: ID does not exist" containerID="773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e" Jan 30 23:09:48 crc kubenswrapper[4979]: I0130 23:09:48.006290 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e"} err="failed to get container status \"773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e\": rpc error: code = NotFound desc = could not find container \"773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e\": container with ID starting with 773019ede59be907332a08e8b177be5e1515c198a6fd3f1ca73f7e3c0d22966e not found: ID does not exist" Jan 30 23:09:49 crc kubenswrapper[4979]: I0130 23:09:49.080096 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" path="/var/lib/kubelet/pods/13efd321-46d8-41f8-9424-6d43e957fe88/volumes" Jan 30 23:10:37 crc kubenswrapper[4979]: I0130 23:10:37.087534 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ntfjw"] Jan 30 23:10:37 crc kubenswrapper[4979]: I0130 23:10:37.088169 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ntfjw"] Jan 30 23:10:39 crc kubenswrapper[4979]: I0130 23:10:39.086767 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="579619ae-df83-40ff-8580-331060c16faf" path="/var/lib/kubelet/pods/579619ae-df83-40ff-8580-331060c16faf/volumes" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.415319 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-mdk2v"] Jan 30 23:10:43 crc kubenswrapper[4979]: E0130 23:10:43.416065 4979 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="extract-content" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.416083 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="extract-content" Jan 30 23:10:43 crc kubenswrapper[4979]: E0130 23:10:43.416094 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="registry-server" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.416101 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="registry-server" Jan 30 23:10:43 crc kubenswrapper[4979]: E0130 23:10:43.416114 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="extract-utilities" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.416124 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="extract-utilities" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.416314 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="13efd321-46d8-41f8-9424-6d43e957fe88" containerName="registry-server" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.416970 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.425081 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-088a-account-create-update-gl7pk"] Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.426324 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.428600 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.436115 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mdk2v"] Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.442399 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-088a-account-create-update-gl7pk"] Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.543932 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhzzz\" (UniqueName: \"kubernetes.io/projected/800775b4-f78f-4f2f-9d21-4dd42458db2b-kube-api-access-xhzzz\") pod \"barbican-db-create-mdk2v\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.544050 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800775b4-f78f-4f2f-9d21-4dd42458db2b-operator-scripts\") pod \"barbican-db-create-mdk2v\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.544074 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-operator-scripts\") pod \"barbican-088a-account-create-update-gl7pk\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " 
pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.544312 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkprc\" (UniqueName: \"kubernetes.io/projected/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-kube-api-access-rkprc\") pod \"barbican-088a-account-create-update-gl7pk\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.645770 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkprc\" (UniqueName: \"kubernetes.io/projected/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-kube-api-access-rkprc\") pod \"barbican-088a-account-create-update-gl7pk\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.646203 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhzzz\" (UniqueName: \"kubernetes.io/projected/800775b4-f78f-4f2f-9d21-4dd42458db2b-kube-api-access-xhzzz\") pod \"barbican-db-create-mdk2v\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.646263 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800775b4-f78f-4f2f-9d21-4dd42458db2b-operator-scripts\") pod \"barbican-db-create-mdk2v\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.646289 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-operator-scripts\") pod \"barbican-088a-account-create-update-gl7pk\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.647127 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-operator-scripts\") pod \"barbican-088a-account-create-update-gl7pk\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.647208 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800775b4-f78f-4f2f-9d21-4dd42458db2b-operator-scripts\") pod \"barbican-db-create-mdk2v\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.667061 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhzzz\" (UniqueName: \"kubernetes.io/projected/800775b4-f78f-4f2f-9d21-4dd42458db2b-kube-api-access-xhzzz\") pod \"barbican-db-create-mdk2v\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.669174 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkprc\" (UniqueName: 
\"kubernetes.io/projected/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-kube-api-access-rkprc\") pod \"barbican-088a-account-create-update-gl7pk\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.736263 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:43 crc kubenswrapper[4979]: I0130 23:10:43.793219 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:44 crc kubenswrapper[4979]: I0130 23:10:44.213588 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mdk2v"] Jan 30 23:10:44 crc kubenswrapper[4979]: I0130 23:10:44.301558 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-088a-account-create-update-gl7pk"] Jan 30 23:10:44 crc kubenswrapper[4979]: W0130 23:10:44.303121 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5bf2d6f_952e_4cec_938b_e1d00042c3ad.slice/crio-06e804c30bf0c1113c3cefe0759fbb6ed481aa3e22a2e87efa2a37ee5be3cb7d WatchSource:0}: Error finding container 06e804c30bf0c1113c3cefe0759fbb6ed481aa3e22a2e87efa2a37ee5be3cb7d: Status 404 returned error can't find the container with id 06e804c30bf0c1113c3cefe0759fbb6ed481aa3e22a2e87efa2a37ee5be3cb7d Jan 30 23:10:44 crc kubenswrapper[4979]: I0130 23:10:44.404096 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-088a-account-create-update-gl7pk" event={"ID":"c5bf2d6f-952e-4cec-938b-e1d00042c3ad","Type":"ContainerStarted","Data":"06e804c30bf0c1113c3cefe0759fbb6ed481aa3e22a2e87efa2a37ee5be3cb7d"} Jan 30 23:10:44 crc kubenswrapper[4979]: I0130 23:10:44.405628 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mdk2v" event={"ID":"800775b4-f78f-4f2f-9d21-4dd42458db2b","Type":"ContainerStarted","Data":"d3369dffd50838ab28f0e6ede24b2ca0bfde61c3882d7e9db2200f77057e58a0"} Jan 30 23:10:45 crc kubenswrapper[4979]: I0130 23:10:45.420212 4979 generic.go:334] "Generic (PLEG): container finished" podID="800775b4-f78f-4f2f-9d21-4dd42458db2b" containerID="46d964a0839cd8efea2510cfac9bc323533200f0741e6142ba6a532c576e85b4" exitCode=0 Jan 30 23:10:45 crc kubenswrapper[4979]: I0130 23:10:45.420281 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mdk2v" event={"ID":"800775b4-f78f-4f2f-9d21-4dd42458db2b","Type":"ContainerDied","Data":"46d964a0839cd8efea2510cfac9bc323533200f0741e6142ba6a532c576e85b4"} Jan 30 23:10:45 crc kubenswrapper[4979]: I0130 23:10:45.425216 4979 generic.go:334] "Generic (PLEG): container finished" podID="c5bf2d6f-952e-4cec-938b-e1d00042c3ad" containerID="088e2e7d854d7dc05cd4dbe8fe4c7ffcbdee731d873f6f602ab10d8c9fb6c170" exitCode=0 Jan 30 23:10:45 crc kubenswrapper[4979]: I0130 23:10:45.425275 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-088a-account-create-update-gl7pk" event={"ID":"c5bf2d6f-952e-4cec-938b-e1d00042c3ad","Type":"ContainerDied","Data":"088e2e7d854d7dc05cd4dbe8fe4c7ffcbdee731d873f6f602ab10d8c9fb6c170"} Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.771797 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.789778 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.915958 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800775b4-f78f-4f2f-9d21-4dd42458db2b-operator-scripts\") pod \"800775b4-f78f-4f2f-9d21-4dd42458db2b\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.916018 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-operator-scripts\") pod \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.916078 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhzzz\" (UniqueName: \"kubernetes.io/projected/800775b4-f78f-4f2f-9d21-4dd42458db2b-kube-api-access-xhzzz\") pod \"800775b4-f78f-4f2f-9d21-4dd42458db2b\" (UID: \"800775b4-f78f-4f2f-9d21-4dd42458db2b\") " Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.916243 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkprc\" (UniqueName: \"kubernetes.io/projected/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-kube-api-access-rkprc\") pod \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\" (UID: \"c5bf2d6f-952e-4cec-938b-e1d00042c3ad\") " Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.917530 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800775b4-f78f-4f2f-9d21-4dd42458db2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "800775b4-f78f-4f2f-9d21-4dd42458db2b" (UID: "800775b4-f78f-4f2f-9d21-4dd42458db2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.917571 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5bf2d6f-952e-4cec-938b-e1d00042c3ad" (UID: "c5bf2d6f-952e-4cec-938b-e1d00042c3ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.923018 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800775b4-f78f-4f2f-9d21-4dd42458db2b-kube-api-access-xhzzz" (OuterVolumeSpecName: "kube-api-access-xhzzz") pod "800775b4-f78f-4f2f-9d21-4dd42458db2b" (UID: "800775b4-f78f-4f2f-9d21-4dd42458db2b"). InnerVolumeSpecName "kube-api-access-xhzzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:10:46 crc kubenswrapper[4979]: I0130 23:10:46.923344 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-kube-api-access-rkprc" (OuterVolumeSpecName: "kube-api-access-rkprc") pod "c5bf2d6f-952e-4cec-938b-e1d00042c3ad" (UID: "c5bf2d6f-952e-4cec-938b-e1d00042c3ad"). InnerVolumeSpecName "kube-api-access-rkprc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.018152 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800775b4-f78f-4f2f-9d21-4dd42458db2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.018183 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.018196 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhzzz\" (UniqueName: \"kubernetes.io/projected/800775b4-f78f-4f2f-9d21-4dd42458db2b-kube-api-access-xhzzz\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.018209 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkprc\" (UniqueName: \"kubernetes.io/projected/c5bf2d6f-952e-4cec-938b-e1d00042c3ad-kube-api-access-rkprc\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.444352 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mdk2v" event={"ID":"800775b4-f78f-4f2f-9d21-4dd42458db2b","Type":"ContainerDied","Data":"d3369dffd50838ab28f0e6ede24b2ca0bfde61c3882d7e9db2200f77057e58a0"} Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.444410 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3369dffd50838ab28f0e6ede24b2ca0bfde61c3882d7e9db2200f77057e58a0" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.444596 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mdk2v" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.448351 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-088a-account-create-update-gl7pk" Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.448890 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-088a-account-create-update-gl7pk" event={"ID":"c5bf2d6f-952e-4cec-938b-e1d00042c3ad","Type":"ContainerDied","Data":"06e804c30bf0c1113c3cefe0759fbb6ed481aa3e22a2e87efa2a37ee5be3cb7d"} Jan 30 23:10:47 crc kubenswrapper[4979]: I0130 23:10:47.449058 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e804c30bf0c1113c3cefe0759fbb6ed481aa3e22a2e87efa2a37ee5be3cb7d" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.642978 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-qpzjk"] Jan 30 23:10:48 crc kubenswrapper[4979]: E0130 23:10:48.643857 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5bf2d6f-952e-4cec-938b-e1d00042c3ad" containerName="mariadb-account-create-update" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.643873 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5bf2d6f-952e-4cec-938b-e1d00042c3ad" containerName="mariadb-account-create-update" Jan 30 23:10:48 crc kubenswrapper[4979]: E0130 23:10:48.643889 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800775b4-f78f-4f2f-9d21-4dd42458db2b" containerName="mariadb-database-create" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.643895 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="800775b4-f78f-4f2f-9d21-4dd42458db2b" containerName="mariadb-database-create" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.644196 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5bf2d6f-952e-4cec-938b-e1d00042c3ad" containerName="mariadb-account-create-update" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.644229 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="800775b4-f78f-4f2f-9d21-4dd42458db2b" containerName="mariadb-database-create" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.644930 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.647357 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fpkxv" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.648859 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.653076 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qpzjk"] Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.750330 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-db-sync-config-data\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.750428 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-combined-ca-bundle\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.750466 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmngq\" (UniqueName: \"kubernetes.io/projected/338244cb-adb6-4402-ba74-378f70078ebd-kube-api-access-bmngq\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.852738 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-combined-ca-bundle\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.852802 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmngq\" (UniqueName: \"kubernetes.io/projected/338244cb-adb6-4402-ba74-378f70078ebd-kube-api-access-bmngq\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.852922 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-db-sync-config-data\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.861007 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-combined-ca-bundle\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.861312 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-db-sync-config-data\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.874571 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmngq\" (UniqueName: \"kubernetes.io/projected/338244cb-adb6-4402-ba74-378f70078ebd-kube-api-access-bmngq\") pod \"barbican-db-sync-qpzjk\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:48 crc kubenswrapper[4979]: I0130 23:10:48.973231 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:49 crc kubenswrapper[4979]: I0130 23:10:49.411160 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qpzjk"] Jan 30 23:10:49 crc kubenswrapper[4979]: I0130 23:10:49.480034 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qpzjk" event={"ID":"338244cb-adb6-4402-ba74-378f70078ebd","Type":"ContainerStarted","Data":"2e1f28ff649507ef80db11e5de7bbb5433150df8b46080c23b4af930f6e46fe6"} Jan 30 23:10:50 crc kubenswrapper[4979]: I0130 23:10:50.487499 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qpzjk" event={"ID":"338244cb-adb6-4402-ba74-378f70078ebd","Type":"ContainerStarted","Data":"6e9b936da74c87dcee37685c96b2ae5e396a4383a9a39ae0063e6c3ec2306db6"} Jan 30 23:10:50 crc kubenswrapper[4979]: I0130 23:10:50.505324 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qpzjk" podStartSLOduration=2.505308687 podStartE2EDuration="2.505308687s" podCreationTimestamp="2026-01-30 23:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:10:50.498936035 +0000 UTC m=+5446.460183068" watchObservedRunningTime="2026-01-30 23:10:50.505308687 +0000 UTC m=+5446.466555720" Jan 30 23:10:51 crc kubenswrapper[4979]: I0130 23:10:51.497397 4979 generic.go:334] "Generic (PLEG): container finished" podID="338244cb-adb6-4402-ba74-378f70078ebd" containerID="6e9b936da74c87dcee37685c96b2ae5e396a4383a9a39ae0063e6c3ec2306db6" exitCode=0 Jan 30 23:10:51 crc kubenswrapper[4979]: I0130 23:10:51.497441 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qpzjk" event={"ID":"338244cb-adb6-4402-ba74-378f70078ebd","Type":"ContainerDied","Data":"6e9b936da74c87dcee37685c96b2ae5e396a4383a9a39ae0063e6c3ec2306db6"} Jan 30 23:10:52 crc kubenswrapper[4979]: I0130 23:10:52.936214 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.042077 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmngq\" (UniqueName: \"kubernetes.io/projected/338244cb-adb6-4402-ba74-378f70078ebd-kube-api-access-bmngq\") pod \"338244cb-adb6-4402-ba74-378f70078ebd\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.042298 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-db-sync-config-data\") pod \"338244cb-adb6-4402-ba74-378f70078ebd\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.042352 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-combined-ca-bundle\") pod \"338244cb-adb6-4402-ba74-378f70078ebd\" (UID: \"338244cb-adb6-4402-ba74-378f70078ebd\") " Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.049514 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "338244cb-adb6-4402-ba74-378f70078ebd" (UID: "338244cb-adb6-4402-ba74-378f70078ebd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.049703 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/338244cb-adb6-4402-ba74-378f70078ebd-kube-api-access-bmngq" (OuterVolumeSpecName: "kube-api-access-bmngq") pod "338244cb-adb6-4402-ba74-378f70078ebd" (UID: "338244cb-adb6-4402-ba74-378f70078ebd"). InnerVolumeSpecName "kube-api-access-bmngq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.074572 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "338244cb-adb6-4402-ba74-378f70078ebd" (UID: "338244cb-adb6-4402-ba74-378f70078ebd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.144670 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmngq\" (UniqueName: \"kubernetes.io/projected/338244cb-adb6-4402-ba74-378f70078ebd-kube-api-access-bmngq\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.144712 4979 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.144725 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338244cb-adb6-4402-ba74-378f70078ebd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.519958 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qpzjk" event={"ID":"338244cb-adb6-4402-ba74-378f70078ebd","Type":"ContainerDied","Data":"2e1f28ff649507ef80db11e5de7bbb5433150df8b46080c23b4af930f6e46fe6"} Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.520341 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e1f28ff649507ef80db11e5de7bbb5433150df8b46080c23b4af930f6e46fe6" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.520050 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qpzjk" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.691743 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5c85d579b5-svwjh"] Jan 30 23:10:53 crc kubenswrapper[4979]: E0130 23:10:53.692128 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338244cb-adb6-4402-ba74-378f70078ebd" containerName="barbican-db-sync" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.692149 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="338244cb-adb6-4402-ba74-378f70078ebd" containerName="barbican-db-sync" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.692351 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="338244cb-adb6-4402-ba74-378f70078ebd" containerName="barbican-db-sync" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.693490 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.695417 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.697809 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fpkxv" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.698218 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.707691 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c85d579b5-svwjh"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.754114 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd72817a-eff0-4fac-ba2b-040115385897-logs\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.754182 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-combined-ca-bundle\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.754417 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-config-data-custom\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.754500 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-config-data\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.754666 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nntg\" (UniqueName: \"kubernetes.io/projected/fd72817a-eff0-4fac-ba2b-040115385897-kube-api-access-5nntg\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.815201 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7968668d89-w7l26"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.816997 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.833525 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7ff7d98446-pts46"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.835019 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.838242 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.845958 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7968668d89-w7l26"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.854809 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7ff7d98446-pts46"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855690 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd72817a-eff0-4fac-ba2b-040115385897-logs\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855728 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-combined-ca-bundle\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855768 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-config\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855793 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-nb\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855829 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9zhj\" (UniqueName: \"kubernetes.io/projected/af646c27-e12e-47e1-b540-6f37012f4f48-kube-api-access-h9zhj\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855855 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-config-data-custom\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855882 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-config-data\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855914 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-sb\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855934 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-dns-svc\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.855953 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nntg\" (UniqueName: \"kubernetes.io/projected/fd72817a-eff0-4fac-ba2b-040115385897-kube-api-access-5nntg\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.856177 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd72817a-eff0-4fac-ba2b-040115385897-logs\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.866560 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-combined-ca-bundle\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.868672 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-config-data-custom\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.892351 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd72817a-eff0-4fac-ba2b-040115385897-config-data\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.897626 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nntg\" (UniqueName: \"kubernetes.io/projected/fd72817a-eff0-4fac-ba2b-040115385897-kube-api-access-5nntg\") pod \"barbican-worker-5c85d579b5-svwjh\" (UID: \"fd72817a-eff0-4fac-ba2b-040115385897\") " pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.933902 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6d46697d68-frccf"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.935964 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.943163 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.954448 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d46697d68-frccf"] Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958111 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-sb\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958180 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-dns-svc\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958243 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-combined-ca-bundle\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958281 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-config-data\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958317 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e21af86-2d45-409c-b692-97bc60c3d806-logs\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958363 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-config\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958390 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vp2d\" (UniqueName: \"kubernetes.io/projected/0e21af86-2d45-409c-b692-97bc60c3d806-kube-api-access-2vp2d\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958423 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-nb\") pod 
\"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958449 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-config-data-custom\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.958498 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9zhj\" (UniqueName: \"kubernetes.io/projected/af646c27-e12e-47e1-b540-6f37012f4f48-kube-api-access-h9zhj\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.959080 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-dns-svc\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.959462 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-config\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.959953 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-sb\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.961198 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-nb\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:53 crc kubenswrapper[4979]: I0130 23:10:53.995123 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9zhj\" (UniqueName: \"kubernetes.io/projected/af646c27-e12e-47e1-b540-6f37012f4f48-kube-api-access-h9zhj\") pod \"dnsmasq-dns-7968668d89-w7l26\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.017014 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5c85d579b5-svwjh" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.060490 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e21af86-2d45-409c-b692-97bc60c3d806-logs\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.060860 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-combined-ca-bundle\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.060889 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f76ba6-bd87-414d-b226-07f7a8705fea-logs\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.060936 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vp2d\" (UniqueName: \"kubernetes.io/projected/0e21af86-2d45-409c-b692-97bc60c3d806-kube-api-access-2vp2d\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.060971 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-config-data-custom\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.061017 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-config-data\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.061101 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn8lt\" (UniqueName: \"kubernetes.io/projected/58f76ba6-bd87-414d-b226-07f7a8705fea-kube-api-access-fn8lt\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.061150 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-config-data-custom\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.061185 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-combined-ca-bundle\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.061218 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-config-data\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.061933 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e21af86-2d45-409c-b692-97bc60c3d806-logs\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.067592 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-config-data-custom\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.070458 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-config-data\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.079930 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e21af86-2d45-409c-b692-97bc60c3d806-combined-ca-bundle\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.080392 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vp2d\" (UniqueName: \"kubernetes.io/projected/0e21af86-2d45-409c-b692-97bc60c3d806-kube-api-access-2vp2d\") pod \"barbican-keystone-listener-7ff7d98446-pts46\" (UID: \"0e21af86-2d45-409c-b692-97bc60c3d806\") " pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.134919 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.153652 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.167414 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-config-data\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.168818 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn8lt\" (UniqueName: \"kubernetes.io/projected/58f76ba6-bd87-414d-b226-07f7a8705fea-kube-api-access-fn8lt\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.169371 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-config-data-custom\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.170267 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-combined-ca-bundle\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.170338 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f76ba6-bd87-414d-b226-07f7a8705fea-logs\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.171400 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58f76ba6-bd87-414d-b226-07f7a8705fea-logs\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.174676 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-config-data-custom\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.175527 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-config-data\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.183710 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f76ba6-bd87-414d-b226-07f7a8705fea-combined-ca-bundle\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 
23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.186590 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn8lt\" (UniqueName: \"kubernetes.io/projected/58f76ba6-bd87-414d-b226-07f7a8705fea-kube-api-access-fn8lt\") pod \"barbican-api-6d46697d68-frccf\" (UID: \"58f76ba6-bd87-414d-b226-07f7a8705fea\") " pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.263263 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.493713 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c85d579b5-svwjh"] Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.529322 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c85d579b5-svwjh" event={"ID":"fd72817a-eff0-4fac-ba2b-040115385897","Type":"ContainerStarted","Data":"ef96517a818a235b7731c6fe8e7babffd3efd15c42b1646d202d6a8b588429d0"} Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.649788 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7968668d89-w7l26"] Jan 30 23:10:54 crc kubenswrapper[4979]: W0130 23:10:54.653956 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf646c27_e12e_47e1_b540_6f37012f4f48.slice/crio-c156003a45858a554feb9ea11361e86a2cdbe520f8d2253346927da0e77fcc37 WatchSource:0}: Error finding container c156003a45858a554feb9ea11361e86a2cdbe520f8d2253346927da0e77fcc37: Status 404 returned error can't find the container with id c156003a45858a554feb9ea11361e86a2cdbe520f8d2253346927da0e77fcc37 Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.940477 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7ff7d98446-pts46"] Jan 30 23:10:54 crc kubenswrapper[4979]: I0130 23:10:54.948622 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d46697d68-frccf"] Jan 30 23:10:54 crc kubenswrapper[4979]: W0130 23:10:54.964140 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e21af86_2d45_409c_b692_97bc60c3d806.slice/crio-9bee7c3cb7a023634a4dac31369babb6a9a73f0147583f34aff80219b5ccaf0b WatchSource:0}: Error finding container 9bee7c3cb7a023634a4dac31369babb6a9a73f0147583f34aff80219b5ccaf0b: Status 404 returned error can't find the container with id 9bee7c3cb7a023634a4dac31369babb6a9a73f0147583f34aff80219b5ccaf0b Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.538712 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" event={"ID":"0e21af86-2d45-409c-b692-97bc60c3d806","Type":"ContainerStarted","Data":"e49efcd48eea953dbbd9840b65e722e4f15136fec9106c84099e41a796d2dbad"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.538762 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" event={"ID":"0e21af86-2d45-409c-b692-97bc60c3d806","Type":"ContainerStarted","Data":"522eb023b7f322ca244a6d80b07aa9c52f273a7ff2be4ad9dba2a2f6be024a9d"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.538774 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" 
event={"ID":"0e21af86-2d45-409c-b692-97bc60c3d806","Type":"ContainerStarted","Data":"9bee7c3cb7a023634a4dac31369babb6a9a73f0147583f34aff80219b5ccaf0b"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.540787 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c85d579b5-svwjh" event={"ID":"fd72817a-eff0-4fac-ba2b-040115385897","Type":"ContainerStarted","Data":"59ea1ac095ff1d17f585dc428ba7608bbdb9e409cd5604db36f8018d15758212"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.540847 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c85d579b5-svwjh" event={"ID":"fd72817a-eff0-4fac-ba2b-040115385897","Type":"ContainerStarted","Data":"fdbce8fa2f629b54856b8da8d6ff1943f78d20c4f11ea88b09129c2115d93b27"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.542651 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d46697d68-frccf" event={"ID":"58f76ba6-bd87-414d-b226-07f7a8705fea","Type":"ContainerStarted","Data":"98e80f23f06c248467a2a0795206707914c7aa181e1dab69b59e6edd13acfd54"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.542679 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d46697d68-frccf" event={"ID":"58f76ba6-bd87-414d-b226-07f7a8705fea","Type":"ContainerStarted","Data":"e34e1ba4c4794ef5bae79690f36965d1e0d16deb645eea9dff77d7d2205f0623"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.542689 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d46697d68-frccf" event={"ID":"58f76ba6-bd87-414d-b226-07f7a8705fea","Type":"ContainerStarted","Data":"dbb1fdb38b494cfca275674ddd8d6d350e061f6778781c86373847ee87a9a560"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.542859 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.544551 4979 generic.go:334] "Generic (PLEG): container finished" podID="af646c27-e12e-47e1-b540-6f37012f4f48" containerID="2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40" exitCode=0 Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.544592 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7968668d89-w7l26" event={"ID":"af646c27-e12e-47e1-b540-6f37012f4f48","Type":"ContainerDied","Data":"2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.544707 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7968668d89-w7l26" event={"ID":"af646c27-e12e-47e1-b540-6f37012f4f48","Type":"ContainerStarted","Data":"c156003a45858a554feb9ea11361e86a2cdbe520f8d2253346927da0e77fcc37"} Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.564024 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7ff7d98446-pts46" podStartSLOduration=2.56400063 podStartE2EDuration="2.56400063s" podCreationTimestamp="2026-01-30 23:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:10:55.560900557 +0000 UTC m=+5451.522147590" watchObservedRunningTime="2026-01-30 23:10:55.56400063 +0000 UTC m=+5451.525247663" Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.580604 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-api-6d46697d68-frccf" podStartSLOduration=2.580584506 podStartE2EDuration="2.580584506s" podCreationTimestamp="2026-01-30 23:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:10:55.576500826 +0000 UTC m=+5451.537747859" watchObservedRunningTime="2026-01-30 23:10:55.580584506 +0000 UTC m=+5451.541831539" Jan 30 23:10:55 crc kubenswrapper[4979]: I0130 23:10:55.635574 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5c85d579b5-svwjh" podStartSLOduration=2.635554684 podStartE2EDuration="2.635554684s" podCreationTimestamp="2026-01-30 23:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:10:55.624694512 +0000 UTC m=+5451.585941545" watchObservedRunningTime="2026-01-30 23:10:55.635554684 +0000 UTC m=+5451.596801717" Jan 30 23:10:56 crc kubenswrapper[4979]: I0130 23:10:56.556828 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7968668d89-w7l26" event={"ID":"af646c27-e12e-47e1-b540-6f37012f4f48","Type":"ContainerStarted","Data":"f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea"} Jan 30 23:10:56 crc kubenswrapper[4979]: I0130 23:10:56.557828 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:10:56 crc kubenswrapper[4979]: I0130 23:10:56.557851 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:11:02 crc kubenswrapper[4979]: I0130 23:11:02.039845 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:11:02 crc kubenswrapper[4979]: I0130 23:11:02.040786 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.138334 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.174796 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7968668d89-w7l26" podStartSLOduration=11.174763133 podStartE2EDuration="11.174763133s" podCreationTimestamp="2026-01-30 23:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:10:56.590143294 +0000 UTC m=+5452.551390327" watchObservedRunningTime="2026-01-30 23:11:04.174763133 +0000 UTC m=+5460.136010176" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.217989 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7457648489-f9xxs"] Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.220113 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7457648489-f9xxs" 
podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerName="dnsmasq-dns" containerID="cri-o://99e085a00a239b14d311fb678f2e6ff2ee78f2fddb6b6103e4849b2212235ee3" gracePeriod=10 Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.641374 4979 generic.go:334] "Generic (PLEG): container finished" podID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerID="99e085a00a239b14d311fb678f2e6ff2ee78f2fddb6b6103e4849b2212235ee3" exitCode=0 Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.641453 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457648489-f9xxs" event={"ID":"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7","Type":"ContainerDied","Data":"99e085a00a239b14d311fb678f2e6ff2ee78f2fddb6b6103e4849b2212235ee3"} Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.738078 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.810323 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kckg6\" (UniqueName: \"kubernetes.io/projected/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-kube-api-access-kckg6\") pod \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.810495 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-dns-svc\") pod \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.810532 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-nb\") pod \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.810642 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-sb\") pod \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.810673 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-config\") pod \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\" (UID: \"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7\") " Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.818544 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-kube-api-access-kckg6" (OuterVolumeSpecName: "kube-api-access-kckg6") pod "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" (UID: "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7"). InnerVolumeSpecName "kube-api-access-kckg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.855284 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-config" (OuterVolumeSpecName: "config") pod "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" (UID: "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.856140 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" (UID: "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.870209 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" (UID: "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.877237 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" (UID: "36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.914811 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kckg6\" (UniqueName: \"kubernetes.io/projected/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-kube-api-access-kckg6\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.914883 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.914909 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.914933 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:04 crc kubenswrapper[4979]: I0130 23:11:04.914957 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.650249 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457648489-f9xxs" event={"ID":"36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7","Type":"ContainerDied","Data":"cfb724e5a3cfea8fe7b3b514eba9b716012b887e4da9bc4289da05ab447f45c5"} Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.650647 4979 scope.go:117] "RemoveContainer" containerID="99e085a00a239b14d311fb678f2e6ff2ee78f2fddb6b6103e4849b2212235ee3" Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.650895 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7457648489-f9xxs" Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.673116 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7457648489-f9xxs"] Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.679932 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7457648489-f9xxs"] Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.684049 4979 scope.go:117] "RemoveContainer" containerID="681ef7059193b0717b0eb969706fa681ca26f969cea9f506cb0573eaef292ba8" Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.757836 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:11:05 crc kubenswrapper[4979]: I0130 23:11:05.846010 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d46697d68-frccf" Jan 30 23:11:07 crc kubenswrapper[4979]: I0130 23:11:07.081094 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" path="/var/lib/kubelet/pods/36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7/volumes" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.499993 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2ltc5"] Jan 30 23:11:17 crc kubenswrapper[4979]: E0130 23:11:17.500939 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerName="init" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.500956 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerName="init" Jan 30 23:11:17 crc kubenswrapper[4979]: E0130 23:11:17.500980 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerName="dnsmasq-dns" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.500989 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerName="dnsmasq-dns" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.501208 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="36900afe-d3cd-4b93-8d7c-3d8d3a38f4f7" containerName="dnsmasq-dns" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.501894 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.516355 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2ltc5"] Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.604584 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5575-account-create-update-hrq7w"] Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.605909 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.609904 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.617449 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5575-account-create-update-hrq7w"] Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.671673 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxbwx\" (UniqueName: \"kubernetes.io/projected/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-kube-api-access-vxbwx\") pod \"neutron-db-create-2ltc5\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.671769 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-operator-scripts\") pod \"neutron-db-create-2ltc5\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.772664 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-operator-scripts\") pod \"neutron-db-create-2ltc5\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.772778 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m948\" (UniqueName: \"kubernetes.io/projected/b871a72e-a648-4c40-b5eb-604c75307e21-kube-api-access-7m948\") pod \"neutron-5575-account-create-update-hrq7w\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.772803 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b871a72e-a648-4c40-b5eb-604c75307e21-operator-scripts\") pod \"neutron-5575-account-create-update-hrq7w\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.772822 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxbwx\" (UniqueName: \"kubernetes.io/projected/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-kube-api-access-vxbwx\") pod \"neutron-db-create-2ltc5\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.773810 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-operator-scripts\") pod \"neutron-db-create-2ltc5\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.802993 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxbwx\" (UniqueName: \"kubernetes.io/projected/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-kube-api-access-vxbwx\") pod 
\"neutron-db-create-2ltc5\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.819568 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.874359 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m948\" (UniqueName: \"kubernetes.io/projected/b871a72e-a648-4c40-b5eb-604c75307e21-kube-api-access-7m948\") pod \"neutron-5575-account-create-update-hrq7w\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.874837 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b871a72e-a648-4c40-b5eb-604c75307e21-operator-scripts\") pod \"neutron-5575-account-create-update-hrq7w\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.875600 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b871a72e-a648-4c40-b5eb-604c75307e21-operator-scripts\") pod \"neutron-5575-account-create-update-hrq7w\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.899721 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m948\" (UniqueName: \"kubernetes.io/projected/b871a72e-a648-4c40-b5eb-604c75307e21-kube-api-access-7m948\") pod \"neutron-5575-account-create-update-hrq7w\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:17 crc kubenswrapper[4979]: I0130 23:11:17.927573 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.341266 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2ltc5"] Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.410396 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5575-account-create-update-hrq7w"] Jan 30 23:11:18 crc kubenswrapper[4979]: W0130 23:11:18.415424 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb871a72e_a648_4c40_b5eb_604c75307e21.slice/crio-a4fce5c0b66fcc0cc1d16f6f0e0d6bc5ab2885b502b86b892847f5905b68f5cb WatchSource:0}: Error finding container a4fce5c0b66fcc0cc1d16f6f0e0d6bc5ab2885b502b86b892847f5905b68f5cb: Status 404 returned error can't find the container with id a4fce5c0b66fcc0cc1d16f6f0e0d6bc5ab2885b502b86b892847f5905b68f5cb Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.779381 4979 generic.go:334] "Generic (PLEG): container finished" podID="b871a72e-a648-4c40-b5eb-604c75307e21" containerID="6488fa6f75b07a884cb1c9e243ae1419c47f4f31507eee906fc6e83084e37e42" exitCode=0 Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.779719 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5575-account-create-update-hrq7w" event={"ID":"b871a72e-a648-4c40-b5eb-604c75307e21","Type":"ContainerDied","Data":"6488fa6f75b07a884cb1c9e243ae1419c47f4f31507eee906fc6e83084e37e42"} Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.779753 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5575-account-create-update-hrq7w" event={"ID":"b871a72e-a648-4c40-b5eb-604c75307e21","Type":"ContainerStarted","Data":"a4fce5c0b66fcc0cc1d16f6f0e0d6bc5ab2885b502b86b892847f5905b68f5cb"} Jan 30 23:11:18 crc kubenswrapper[4979]: E0130 23:11:18.781315 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92c4a95_be2f_4c0d_a789_f7505dcdfd97.slice/crio-7d37ab6343b96618c109dfaf1d8e673f2a0db0f5f37da07bff5cdaeffc9889e5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92c4a95_be2f_4c0d_a789_f7505dcdfd97.slice/crio-conmon-7d37ab6343b96618c109dfaf1d8e673f2a0db0f5f37da07bff5cdaeffc9889e5.scope\": RecentStats: unable to find data in memory cache]" Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.782796 4979 generic.go:334] "Generic (PLEG): container finished" podID="b92c4a95-be2f-4c0d-a789-f7505dcdfd97" containerID="7d37ab6343b96618c109dfaf1d8e673f2a0db0f5f37da07bff5cdaeffc9889e5" exitCode=0 Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.782823 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2ltc5" event={"ID":"b92c4a95-be2f-4c0d-a789-f7505dcdfd97","Type":"ContainerDied","Data":"7d37ab6343b96618c109dfaf1d8e673f2a0db0f5f37da07bff5cdaeffc9889e5"} Jan 30 23:11:18 crc kubenswrapper[4979]: I0130 23:11:18.782864 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2ltc5" event={"ID":"b92c4a95-be2f-4c0d-a789-f7505dcdfd97","Type":"ContainerStarted","Data":"850c228c9585efa67c6390fd7f34afcf8ba7838076a007fbe5a90b9d03314299"} Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.142572 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.229086 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.241940 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-operator-scripts\") pod \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.242078 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxbwx\" (UniqueName: \"kubernetes.io/projected/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-kube-api-access-vxbwx\") pod \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\" (UID: \"b92c4a95-be2f-4c0d-a789-f7505dcdfd97\") " Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.244147 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b92c4a95-be2f-4c0d-a789-f7505dcdfd97" (UID: "b92c4a95-be2f-4c0d-a789-f7505dcdfd97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.245136 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.252698 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-kube-api-access-vxbwx" (OuterVolumeSpecName: "kube-api-access-vxbwx") pod "b92c4a95-be2f-4c0d-a789-f7505dcdfd97" (UID: "b92c4a95-be2f-4c0d-a789-f7505dcdfd97"). InnerVolumeSpecName "kube-api-access-vxbwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.346188 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b871a72e-a648-4c40-b5eb-604c75307e21-operator-scripts\") pod \"b871a72e-a648-4c40-b5eb-604c75307e21\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.346424 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m948\" (UniqueName: \"kubernetes.io/projected/b871a72e-a648-4c40-b5eb-604c75307e21-kube-api-access-7m948\") pod \"b871a72e-a648-4c40-b5eb-604c75307e21\" (UID: \"b871a72e-a648-4c40-b5eb-604c75307e21\") " Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.346672 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b871a72e-a648-4c40-b5eb-604c75307e21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b871a72e-a648-4c40-b5eb-604c75307e21" (UID: "b871a72e-a648-4c40-b5eb-604c75307e21"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.346804 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b871a72e-a648-4c40-b5eb-604c75307e21-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.346826 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxbwx\" (UniqueName: \"kubernetes.io/projected/b92c4a95-be2f-4c0d-a789-f7505dcdfd97-kube-api-access-vxbwx\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.349376 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b871a72e-a648-4c40-b5eb-604c75307e21-kube-api-access-7m948" (OuterVolumeSpecName: "kube-api-access-7m948") pod "b871a72e-a648-4c40-b5eb-604c75307e21" (UID: "b871a72e-a648-4c40-b5eb-604c75307e21"). InnerVolumeSpecName "kube-api-access-7m948". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.447860 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m948\" (UniqueName: \"kubernetes.io/projected/b871a72e-a648-4c40-b5eb-604c75307e21-kube-api-access-7m948\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.803597 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5575-account-create-update-hrq7w" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.803640 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5575-account-create-update-hrq7w" event={"ID":"b871a72e-a648-4c40-b5eb-604c75307e21","Type":"ContainerDied","Data":"a4fce5c0b66fcc0cc1d16f6f0e0d6bc5ab2885b502b86b892847f5905b68f5cb"} Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.803696 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4fce5c0b66fcc0cc1d16f6f0e0d6bc5ab2885b502b86b892847f5905b68f5cb" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.806898 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2ltc5" event={"ID":"b92c4a95-be2f-4c0d-a789-f7505dcdfd97","Type":"ContainerDied","Data":"850c228c9585efa67c6390fd7f34afcf8ba7838076a007fbe5a90b9d03314299"} Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.806954 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="850c228c9585efa67c6390fd7f34afcf8ba7838076a007fbe5a90b9d03314299" Jan 30 23:11:20 crc kubenswrapper[4979]: I0130 23:11:20.806969 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2ltc5" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.742150 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-2w7hf"] Jan 30 23:11:22 crc kubenswrapper[4979]: E0130 23:11:22.742748 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92c4a95-be2f-4c0d-a789-f7505dcdfd97" containerName="mariadb-database-create" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.742759 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92c4a95-be2f-4c0d-a789-f7505dcdfd97" containerName="mariadb-database-create" Jan 30 23:11:22 crc kubenswrapper[4979]: E0130 23:11:22.742792 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b871a72e-a648-4c40-b5eb-604c75307e21" containerName="mariadb-account-create-update" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.742798 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b871a72e-a648-4c40-b5eb-604c75307e21" containerName="mariadb-account-create-update" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.742947 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b871a72e-a648-4c40-b5eb-604c75307e21" containerName="mariadb-account-create-update" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.742960 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92c4a95-be2f-4c0d-a789-f7505dcdfd97" containerName="mariadb-database-create" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.743552 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.749971 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.750549 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gzwzm" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.750804 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.756075 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2w7hf"] Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.890788 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vcp\" (UniqueName: \"kubernetes.io/projected/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-kube-api-access-96vcp\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.890904 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-config\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.890932 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-combined-ca-bundle\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 
23:11:22.992799 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96vcp\" (UniqueName: \"kubernetes.io/projected/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-kube-api-access-96vcp\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.992912 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-config\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.992941 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-combined-ca-bundle\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.998731 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-config\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:22 crc kubenswrapper[4979]: I0130 23:11:22.999445 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-combined-ca-bundle\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:23 crc kubenswrapper[4979]: I0130 23:11:23.015798 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96vcp\" (UniqueName: \"kubernetes.io/projected/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-kube-api-access-96vcp\") pod \"neutron-db-sync-2w7hf\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:23 crc kubenswrapper[4979]: I0130 23:11:23.085584 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:23 crc kubenswrapper[4979]: I0130 23:11:23.562927 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2w7hf"] Jan 30 23:11:23 crc kubenswrapper[4979]: I0130 23:11:23.843722 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2w7hf" event={"ID":"fc87a0f7-9b2b-46ce-a000-c1c5195535d8","Type":"ContainerStarted","Data":"75374755f204d179d6df7eb604fb78fdccdb8d7da4cf6f4f7c48a481ad71d134"} Jan 30 23:11:23 crc kubenswrapper[4979]: I0130 23:11:23.845192 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2w7hf" event={"ID":"fc87a0f7-9b2b-46ce-a000-c1c5195535d8","Type":"ContainerStarted","Data":"572fe0142177b6810aeae3d7ced70d935d0c5b5c45be8044697b9d3495773b65"} Jan 30 23:11:23 crc kubenswrapper[4979]: I0130 23:11:23.862976 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-2w7hf" podStartSLOduration=1.8629562179999999 podStartE2EDuration="1.862956218s" podCreationTimestamp="2026-01-30 23:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:11:23.858745235 +0000 UTC m=+5479.819992268" watchObservedRunningTime="2026-01-30 23:11:23.862956218 +0000 UTC m=+5479.824203261" Jan 30 23:11:26 crc kubenswrapper[4979]: I0130 23:11:26.608270 4979 scope.go:117] "RemoveContainer" containerID="2d0a143830dd73a91f1cb09ef9f3967be5ae0e4eb61c252cb0405d7e7fe00ec4" Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.392226 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2jfl5"] Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.399341 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.416263 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jfl5"] Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.483279 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-utilities\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.483438 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpl7f\" (UniqueName: \"kubernetes.io/projected/712bebd9-29c5-4d26-b254-b7d1dfdb8292-kube-api-access-lpl7f\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.483490 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-catalog-content\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.584746 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpl7f\" (UniqueName: \"kubernetes.io/projected/712bebd9-29c5-4d26-b254-b7d1dfdb8292-kube-api-access-lpl7f\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.584804 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-catalog-content\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.584864 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-utilities\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.585655 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-utilities\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.585934 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-catalog-content\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.616718 4979 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lpl7f\" (UniqueName: \"kubernetes.io/projected/712bebd9-29c5-4d26-b254-b7d1dfdb8292-kube-api-access-lpl7f\") pod \"redhat-marketplace-2jfl5\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.783227 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.887592 4979 generic.go:334] "Generic (PLEG): container finished" podID="fc87a0f7-9b2b-46ce-a000-c1c5195535d8" containerID="75374755f204d179d6df7eb604fb78fdccdb8d7da4cf6f4f7c48a481ad71d134" exitCode=0 Jan 30 23:11:27 crc kubenswrapper[4979]: I0130 23:11:27.887664 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2w7hf" event={"ID":"fc87a0f7-9b2b-46ce-a000-c1c5195535d8","Type":"ContainerDied","Data":"75374755f204d179d6df7eb604fb78fdccdb8d7da4cf6f4f7c48a481ad71d134"} Jan 30 23:11:28 crc kubenswrapper[4979]: I0130 23:11:28.290369 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jfl5"] Jan 30 23:11:28 crc kubenswrapper[4979]: I0130 23:11:28.900686 4979 generic.go:334] "Generic (PLEG): container finished" podID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerID="c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872" exitCode=0 Jan 30 23:11:28 crc kubenswrapper[4979]: I0130 23:11:28.901247 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jfl5" event={"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerDied","Data":"c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872"} Jan 30 23:11:28 crc kubenswrapper[4979]: I0130 23:11:28.901290 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jfl5" event={"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerStarted","Data":"43983c64979bfec7b92346854621cf6924983a165cc6f7fb14746e77bc6dda46"} Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.252974 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.431979 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96vcp\" (UniqueName: \"kubernetes.io/projected/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-kube-api-access-96vcp\") pod \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.432381 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-config\") pod \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.432641 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-combined-ca-bundle\") pod \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\" (UID: \"fc87a0f7-9b2b-46ce-a000-c1c5195535d8\") " Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.443306 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-kube-api-access-96vcp" (OuterVolumeSpecName: "kube-api-access-96vcp") pod "fc87a0f7-9b2b-46ce-a000-c1c5195535d8" (UID: "fc87a0f7-9b2b-46ce-a000-c1c5195535d8"). InnerVolumeSpecName "kube-api-access-96vcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.453872 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-config" (OuterVolumeSpecName: "config") pod "fc87a0f7-9b2b-46ce-a000-c1c5195535d8" (UID: "fc87a0f7-9b2b-46ce-a000-c1c5195535d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.457353 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc87a0f7-9b2b-46ce-a000-c1c5195535d8" (UID: "fc87a0f7-9b2b-46ce-a000-c1c5195535d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.535267 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.535295 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96vcp\" (UniqueName: \"kubernetes.io/projected/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-kube-api-access-96vcp\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.535308 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc87a0f7-9b2b-46ce-a000-c1c5195535d8-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.913800 4979 generic.go:334] "Generic (PLEG): container finished" podID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerID="37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da" exitCode=0 Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.913926 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jfl5" event={"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerDied","Data":"37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da"} Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.916202 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2w7hf" event={"ID":"fc87a0f7-9b2b-46ce-a000-c1c5195535d8","Type":"ContainerDied","Data":"572fe0142177b6810aeae3d7ced70d935d0c5b5c45be8044697b9d3495773b65"} Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.916233 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="572fe0142177b6810aeae3d7ced70d935d0c5b5c45be8044697b9d3495773b65" Jan 30 23:11:29 crc kubenswrapper[4979]: I0130 23:11:29.916243 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2w7hf" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.055475 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6b95d565-xrrwt"] Jan 30 23:11:30 crc kubenswrapper[4979]: E0130 23:11:30.056080 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc87a0f7-9b2b-46ce-a000-c1c5195535d8" containerName="neutron-db-sync" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.056127 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc87a0f7-9b2b-46ce-a000-c1c5195535d8" containerName="neutron-db-sync" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.056269 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc87a0f7-9b2b-46ce-a000-c1c5195535d8" containerName="neutron-db-sync" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.057198 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.072663 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6b95d565-xrrwt"] Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.134834 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-998b6c5dc-s8h29"] Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.136500 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.140392 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gzwzm" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.140402 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.140501 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.148488 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.148543 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.148579 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qx8f\" (UniqueName: \"kubernetes.io/projected/8e29f1a4-dca0-42b8-8ee9-e040433dad76-kube-api-access-8qx8f\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.148734 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-config\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.148764 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-dns-svc\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.153449 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-998b6c5dc-s8h29"] Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.250272 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-config\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.250343 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-combined-ca-bundle\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc 
kubenswrapper[4979]: I0130 23:11:30.250392 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-dns-svc\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.250414 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.251381 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-dns-svc\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.251395 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-config\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.251425 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.252441 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.253083 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.253427 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qx8f\" (UniqueName: \"kubernetes.io/projected/8e29f1a4-dca0-42b8-8ee9-e040433dad76-kube-api-access-8qx8f\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.253785 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-httpd-config\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.254113 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmkxb\" (UniqueName: \"kubernetes.io/projected/633158e6-5d40-43e2-a2c9-94e611b32d3c-kube-api-access-xmkxb\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.254154 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-config\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.278199 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qx8f\" (UniqueName: \"kubernetes.io/projected/8e29f1a4-dca0-42b8-8ee9-e040433dad76-kube-api-access-8qx8f\") pod \"dnsmasq-dns-6f6b95d565-xrrwt\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.356750 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-httpd-config\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.356832 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmkxb\" (UniqueName: \"kubernetes.io/projected/633158e6-5d40-43e2-a2c9-94e611b32d3c-kube-api-access-xmkxb\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.356863 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-config\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.356946 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-combined-ca-bundle\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.360327 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-httpd-config\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.360742 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-combined-ca-bundle\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.366222 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/633158e6-5d40-43e2-a2c9-94e611b32d3c-config\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.375769 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmkxb\" (UniqueName: \"kubernetes.io/projected/633158e6-5d40-43e2-a2c9-94e611b32d3c-kube-api-access-xmkxb\") pod \"neutron-998b6c5dc-s8h29\" (UID: \"633158e6-5d40-43e2-a2c9-94e611b32d3c\") " pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.386310 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.452350 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.682842 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6b95d565-xrrwt"] Jan 30 23:11:30 crc kubenswrapper[4979]: W0130 23:11:30.688053 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e29f1a4_dca0_42b8_8ee9_e040433dad76.slice/crio-4760175f4dd061a01c395f5191d4dc74af0c065922b917876039e3461e28ddb3 WatchSource:0}: Error finding container 4760175f4dd061a01c395f5191d4dc74af0c065922b917876039e3461e28ddb3: Status 404 returned error can't find the container with id 4760175f4dd061a01c395f5191d4dc74af0c065922b917876039e3461e28ddb3 Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.926574 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jfl5" event={"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerStarted","Data":"8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb"} Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.928679 4979 generic.go:334] "Generic (PLEG): container finished" podID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerID="70e7a3e289c9bede605a4d28f895b056899de6dff342f7658a2ae4deec0c89ae" exitCode=0 Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.928796 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" event={"ID":"8e29f1a4-dca0-42b8-8ee9-e040433dad76","Type":"ContainerDied","Data":"70e7a3e289c9bede605a4d28f895b056899de6dff342f7658a2ae4deec0c89ae"} Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.928885 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" event={"ID":"8e29f1a4-dca0-42b8-8ee9-e040433dad76","Type":"ContainerStarted","Data":"4760175f4dd061a01c395f5191d4dc74af0c065922b917876039e3461e28ddb3"} Jan 30 23:11:30 crc kubenswrapper[4979]: I0130 23:11:30.960897 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2jfl5" podStartSLOduration=2.561466347 podStartE2EDuration="3.960878789s" podCreationTimestamp="2026-01-30 23:11:27 +0000 UTC" firstStartedPulling="2026-01-30 23:11:28.904737347 +0000 UTC m=+5484.865984380" lastFinishedPulling="2026-01-30 23:11:30.304149789 +0000 UTC m=+5486.265396822" observedRunningTime="2026-01-30 23:11:30.953665765 +0000 UTC m=+5486.914912798" watchObservedRunningTime="2026-01-30 23:11:30.960878789 +0000 UTC m=+5486.922125822" Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.116122 4979 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-998b6c5dc-s8h29"] Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.937416 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-998b6c5dc-s8h29" event={"ID":"633158e6-5d40-43e2-a2c9-94e611b32d3c","Type":"ContainerStarted","Data":"63a6cbc50456cd54d75b72df942bd53620222a8e491b9fd4f175d83f073eb9ca"} Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.937781 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.937801 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-998b6c5dc-s8h29" event={"ID":"633158e6-5d40-43e2-a2c9-94e611b32d3c","Type":"ContainerStarted","Data":"7fbca4c6e87321c1e1bd6191d243f2aa3ef7c9d61539e92cf7b446c886753606"} Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.937816 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-998b6c5dc-s8h29" event={"ID":"633158e6-5d40-43e2-a2c9-94e611b32d3c","Type":"ContainerStarted","Data":"c2026a78dc728c6cb76aaa32fb1d71ec9beb7127f311151b50da5fb60fde77dd"} Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.939634 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" event={"ID":"8e29f1a4-dca0-42b8-8ee9-e040433dad76","Type":"ContainerStarted","Data":"1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4"} Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.939947 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:31 crc kubenswrapper[4979]: I0130 23:11:31.956901 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-998b6c5dc-s8h29" podStartSLOduration=1.956876092 podStartE2EDuration="1.956876092s" podCreationTimestamp="2026-01-30 23:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:11:31.953148822 +0000 UTC m=+5487.914395865" watchObservedRunningTime="2026-01-30 23:11:31.956876092 +0000 UTC m=+5487.918123125" Jan 30 23:11:32 crc kubenswrapper[4979]: I0130 23:11:32.039753 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:11:32 crc kubenswrapper[4979]: I0130 23:11:32.039805 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:11:37 crc kubenswrapper[4979]: I0130 23:11:37.783712 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:37 crc kubenswrapper[4979]: I0130 23:11:37.785566 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:37 crc kubenswrapper[4979]: I0130 23:11:37.912085 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:37 crc kubenswrapper[4979]: I0130 23:11:37.958007 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" podStartSLOduration=7.957984538 podStartE2EDuration="7.957984538s" podCreationTimestamp="2026-01-30 23:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:11:31.976230773 +0000 UTC m=+5487.937477796" watchObservedRunningTime="2026-01-30 23:11:37.957984538 +0000 UTC m=+5493.919231571" Jan 30 23:11:38 crc kubenswrapper[4979]: I0130 23:11:38.048567 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:38 crc kubenswrapper[4979]: I0130 23:11:38.438606 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jfl5"] Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.007836 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2jfl5" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="registry-server" containerID="cri-o://8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb" gracePeriod=2 Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.388170 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.441133 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7968668d89-w7l26"] Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.441426 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7968668d89-w7l26" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" containerName="dnsmasq-dns" containerID="cri-o://f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea" gracePeriod=10 Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.465867 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.563700 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-utilities\") pod \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.563813 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-catalog-content\") pod \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.563948 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpl7f\" (UniqueName: \"kubernetes.io/projected/712bebd9-29c5-4d26-b254-b7d1dfdb8292-kube-api-access-lpl7f\") pod \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\" (UID: \"712bebd9-29c5-4d26-b254-b7d1dfdb8292\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.565410 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-utilities" (OuterVolumeSpecName: "utilities") pod "712bebd9-29c5-4d26-b254-b7d1dfdb8292" (UID: "712bebd9-29c5-4d26-b254-b7d1dfdb8292"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.573784 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712bebd9-29c5-4d26-b254-b7d1dfdb8292-kube-api-access-lpl7f" (OuterVolumeSpecName: "kube-api-access-lpl7f") pod "712bebd9-29c5-4d26-b254-b7d1dfdb8292" (UID: "712bebd9-29c5-4d26-b254-b7d1dfdb8292"). InnerVolumeSpecName "kube-api-access-lpl7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.595648 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "712bebd9-29c5-4d26-b254-b7d1dfdb8292" (UID: "712bebd9-29c5-4d26-b254-b7d1dfdb8292"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.665566 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.665602 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpl7f\" (UniqueName: \"kubernetes.io/projected/712bebd9-29c5-4d26-b254-b7d1dfdb8292-kube-api-access-lpl7f\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.665614 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712bebd9-29c5-4d26-b254-b7d1dfdb8292-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.854420 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.970102 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-nb\") pod \"af646c27-e12e-47e1-b540-6f37012f4f48\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.970218 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-config\") pod \"af646c27-e12e-47e1-b540-6f37012f4f48\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.970247 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9zhj\" (UniqueName: \"kubernetes.io/projected/af646c27-e12e-47e1-b540-6f37012f4f48-kube-api-access-h9zhj\") pod \"af646c27-e12e-47e1-b540-6f37012f4f48\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.970296 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-dns-svc\") pod \"af646c27-e12e-47e1-b540-6f37012f4f48\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.970355 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-sb\") pod \"af646c27-e12e-47e1-b540-6f37012f4f48\" (UID: \"af646c27-e12e-47e1-b540-6f37012f4f48\") " Jan 30 23:11:40 crc kubenswrapper[4979]: I0130 23:11:40.974161 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af646c27-e12e-47e1-b540-6f37012f4f48-kube-api-access-h9zhj" (OuterVolumeSpecName: "kube-api-access-h9zhj") pod "af646c27-e12e-47e1-b540-6f37012f4f48" (UID: "af646c27-e12e-47e1-b540-6f37012f4f48"). InnerVolumeSpecName "kube-api-access-h9zhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.027186 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "af646c27-e12e-47e1-b540-6f37012f4f48" (UID: "af646c27-e12e-47e1-b540-6f37012f4f48"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.027535 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "af646c27-e12e-47e1-b540-6f37012f4f48" (UID: "af646c27-e12e-47e1-b540-6f37012f4f48"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.028498 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-config" (OuterVolumeSpecName: "config") pod "af646c27-e12e-47e1-b540-6f37012f4f48" (UID: "af646c27-e12e-47e1-b540-6f37012f4f48"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.028494 4979 generic.go:334] "Generic (PLEG): container finished" podID="af646c27-e12e-47e1-b540-6f37012f4f48" containerID="f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea" exitCode=0 Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.028521 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7968668d89-w7l26" event={"ID":"af646c27-e12e-47e1-b540-6f37012f4f48","Type":"ContainerDied","Data":"f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea"} Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.028556 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7968668d89-w7l26" event={"ID":"af646c27-e12e-47e1-b540-6f37012f4f48","Type":"ContainerDied","Data":"c156003a45858a554feb9ea11361e86a2cdbe520f8d2253346927da0e77fcc37"} Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.028579 4979 scope.go:117] "RemoveContainer" containerID="f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.028602 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7968668d89-w7l26" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.032409 4979 generic.go:334] "Generic (PLEG): container finished" podID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerID="8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb" exitCode=0 Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.032444 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jfl5" event={"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerDied","Data":"8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb"} Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.032468 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jfl5" event={"ID":"712bebd9-29c5-4d26-b254-b7d1dfdb8292","Type":"ContainerDied","Data":"43983c64979bfec7b92346854621cf6924983a165cc6f7fb14746e77bc6dda46"} Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.032519 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jfl5" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.033616 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af646c27-e12e-47e1-b540-6f37012f4f48" (UID: "af646c27-e12e-47e1-b540-6f37012f4f48"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.062162 4979 scope.go:117] "RemoveContainer" containerID="2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.082784 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.082811 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9zhj\" (UniqueName: \"kubernetes.io/projected/af646c27-e12e-47e1-b540-6f37012f4f48-kube-api-access-h9zhj\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.082820 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.082830 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.082839 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af646c27-e12e-47e1-b540-6f37012f4f48-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.087727 4979 scope.go:117] "RemoveContainer" containerID="f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea" Jan 30 23:11:41 crc kubenswrapper[4979]: E0130 23:11:41.088141 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea\": container with ID starting with f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea not found: ID does not exist" containerID="f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.088186 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea"} err="failed to get container status \"f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea\": rpc error: code = NotFound desc = could not find container \"f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea\": container with ID starting with f4e6345c644b2c9fa6fa101a6be61c1d6781c08a311da9d0944cc751e57992ea not found: ID does not exist" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.088208 4979 scope.go:117] "RemoveContainer" containerID="2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40" Jan 30 23:11:41 crc kubenswrapper[4979]: E0130 23:11:41.088415 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40\": container with ID starting with 2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40 not found: ID does not exist" containerID="2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.088439 4979 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40"} err="failed to get container status \"2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40\": rpc error: code = NotFound desc = could not find container \"2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40\": container with ID starting with 2603012da1851fe7d140544a80c53dabea6382d70e9d36eadbe5af2428d6ac40 not found: ID does not exist" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.088455 4979 scope.go:117] "RemoveContainer" containerID="8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.100467 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jfl5"] Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.100514 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jfl5"] Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.116871 4979 scope.go:117] "RemoveContainer" containerID="37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.180434 4979 scope.go:117] "RemoveContainer" containerID="c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.210875 4979 scope.go:117] "RemoveContainer" containerID="8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb" Jan 30 23:11:41 crc kubenswrapper[4979]: E0130 23:11:41.211357 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb\": container with ID starting with 8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb not found: ID does not exist" containerID="8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.211406 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb"} err="failed to get container status \"8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb\": rpc error: code = NotFound desc = could not find container \"8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb\": container with ID starting with 8fb0f728e9d2db33d0e46736881a4dd5ea44214f013bac45ed5775911c5cfebb not found: ID does not exist" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.211432 4979 scope.go:117] "RemoveContainer" containerID="37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da" Jan 30 23:11:41 crc kubenswrapper[4979]: E0130 23:11:41.211749 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da\": container with ID starting with 37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da not found: ID does not exist" containerID="37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.211779 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da"} err="failed to get container status 
\"37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da\": rpc error: code = NotFound desc = could not find container \"37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da\": container with ID starting with 37d117de83334c24f9259298ab74a52fa95bfeaec896cff702ae9b6c361e90da not found: ID does not exist" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.211801 4979 scope.go:117] "RemoveContainer" containerID="c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872" Jan 30 23:11:41 crc kubenswrapper[4979]: E0130 23:11:41.211994 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872\": container with ID starting with c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872 not found: ID does not exist" containerID="c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.212014 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872"} err="failed to get container status \"c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872\": rpc error: code = NotFound desc = could not find container \"c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872\": container with ID starting with c1dbe4ee5b7191badd4d4dd2a426eb2857583b7aa6138d295f96116a1d9f5872 not found: ID does not exist" Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.351641 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7968668d89-w7l26"] Jan 30 23:11:41 crc kubenswrapper[4979]: I0130 23:11:41.358383 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7968668d89-w7l26"] Jan 30 23:11:43 crc kubenswrapper[4979]: I0130 23:11:43.085208 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" path="/var/lib/kubelet/pods/712bebd9-29c5-4d26-b254-b7d1dfdb8292/volumes" Jan 30 23:11:43 crc kubenswrapper[4979]: I0130 23:11:43.087083 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" path="/var/lib/kubelet/pods/af646c27-e12e-47e1-b540-6f37012f4f48/volumes" Jan 30 23:12:00 crc kubenswrapper[4979]: I0130 23:12:00.462818 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-998b6c5dc-s8h29" Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.039718 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.040489 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.040564 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:12:02 crc 
kubenswrapper[4979]: I0130 23:12:02.041661 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94f5c7990b2576813cfa39ef85f902f7a75770e6c04a43bd1848309b7c39ad19"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.041801 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://94f5c7990b2576813cfa39ef85f902f7a75770e6c04a43bd1848309b7c39ad19" gracePeriod=600 Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.383071 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="94f5c7990b2576813cfa39ef85f902f7a75770e6c04a43bd1848309b7c39ad19" exitCode=0 Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.383163 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"94f5c7990b2576813cfa39ef85f902f7a75770e6c04a43bd1848309b7c39ad19"} Jan 30 23:12:02 crc kubenswrapper[4979]: I0130 23:12:02.383206 4979 scope.go:117] "RemoveContainer" containerID="b0d79a41b8fecc227eed2095595aaada599002f8e10259ae34e3b02148a0bed6" Jan 30 23:12:03 crc kubenswrapper[4979]: I0130 23:12:03.393791 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f"} Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.227134 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-ds8kf"] Jan 30 23:12:07 crc kubenswrapper[4979]: E0130 23:12:07.227919 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="registry-server" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.227932 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="registry-server" Jan 30 23:12:07 crc kubenswrapper[4979]: E0130 23:12:07.227948 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" containerName="dnsmasq-dns" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.227954 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" containerName="dnsmasq-dns" Jan 30 23:12:07 crc kubenswrapper[4979]: E0130 23:12:07.227975 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="extract-utilities" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.227982 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="extract-utilities" Jan 30 23:12:07 crc kubenswrapper[4979]: E0130 23:12:07.227995 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" containerName="init" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.228000 4979 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" containerName="init" Jan 30 23:12:07 crc kubenswrapper[4979]: E0130 23:12:07.228013 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="extract-content" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.228019 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="extract-content" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.228208 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="af646c27-e12e-47e1-b540-6f37012f4f48" containerName="dnsmasq-dns" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.228237 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="712bebd9-29c5-4d26-b254-b7d1dfdb8292" containerName="registry-server" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.228794 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.244237 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ds8kf"] Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.327193 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-dc54-account-create-update-qv7gj"] Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.328225 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.331582 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.337636 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-dc54-account-create-update-qv7gj"] Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.356162 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kw5p\" (UniqueName: \"kubernetes.io/projected/90346f0c-7cc3-4f3c-a29f-9b7265eff703-kube-api-access-8kw5p\") pod \"glance-db-create-ds8kf\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.356487 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90346f0c-7cc3-4f3c-a29f-9b7265eff703-operator-scripts\") pod \"glance-db-create-ds8kf\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.457931 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kw5p\" (UniqueName: \"kubernetes.io/projected/90346f0c-7cc3-4f3c-a29f-9b7265eff703-kube-api-access-8kw5p\") pod \"glance-db-create-ds8kf\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.457989 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlq8d\" (UniqueName: \"kubernetes.io/projected/59dad3f6-f4ce-4ce7-8364-044694d448f1-kube-api-access-hlq8d\") pod \"glance-dc54-account-create-update-qv7gj\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " 
pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.458047 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dad3f6-f4ce-4ce7-8364-044694d448f1-operator-scripts\") pod \"glance-dc54-account-create-update-qv7gj\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.458106 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90346f0c-7cc3-4f3c-a29f-9b7265eff703-operator-scripts\") pod \"glance-db-create-ds8kf\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.458753 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90346f0c-7cc3-4f3c-a29f-9b7265eff703-operator-scripts\") pod \"glance-db-create-ds8kf\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.476491 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kw5p\" (UniqueName: \"kubernetes.io/projected/90346f0c-7cc3-4f3c-a29f-9b7265eff703-kube-api-access-8kw5p\") pod \"glance-db-create-ds8kf\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.550430 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.559674 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlq8d\" (UniqueName: \"kubernetes.io/projected/59dad3f6-f4ce-4ce7-8364-044694d448f1-kube-api-access-hlq8d\") pod \"glance-dc54-account-create-update-qv7gj\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.559745 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dad3f6-f4ce-4ce7-8364-044694d448f1-operator-scripts\") pod \"glance-dc54-account-create-update-qv7gj\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.560603 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dad3f6-f4ce-4ce7-8364-044694d448f1-operator-scripts\") pod \"glance-dc54-account-create-update-qv7gj\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.578739 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlq8d\" (UniqueName: \"kubernetes.io/projected/59dad3f6-f4ce-4ce7-8364-044694d448f1-kube-api-access-hlq8d\") pod \"glance-dc54-account-create-update-qv7gj\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:07 crc kubenswrapper[4979]: I0130 23:12:07.652498 4979 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.016480 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ds8kf"] Jan 30 23:12:08 crc kubenswrapper[4979]: W0130 23:12:08.024322 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90346f0c_7cc3_4f3c_a29f_9b7265eff703.slice/crio-3ce4e8a0b80e74cf6b7855adeab8bb5abf046c11861a1831d67a9b499ae3a221 WatchSource:0}: Error finding container 3ce4e8a0b80e74cf6b7855adeab8bb5abf046c11861a1831d67a9b499ae3a221: Status 404 returned error can't find the container with id 3ce4e8a0b80e74cf6b7855adeab8bb5abf046c11861a1831d67a9b499ae3a221 Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.134661 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-dc54-account-create-update-qv7gj"] Jan 30 23:12:08 crc kubenswrapper[4979]: W0130 23:12:08.136554 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59dad3f6_f4ce_4ce7_8364_044694d448f1.slice/crio-8454083d18221e0dffb508f2cb54444be1cd260a80afce7f9f8aaee8c12e8c39 WatchSource:0}: Error finding container 8454083d18221e0dffb508f2cb54444be1cd260a80afce7f9f8aaee8c12e8c39: Status 404 returned error can't find the container with id 8454083d18221e0dffb508f2cb54444be1cd260a80afce7f9f8aaee8c12e8c39 Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.434348 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dc54-account-create-update-qv7gj" event={"ID":"59dad3f6-f4ce-4ce7-8364-044694d448f1","Type":"ContainerStarted","Data":"1029e32864f04940f8e059d045d4582115f310e97f7c3c3262b89f2a7fc67ed7"} Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.434405 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dc54-account-create-update-qv7gj" event={"ID":"59dad3f6-f4ce-4ce7-8364-044694d448f1","Type":"ContainerStarted","Data":"8454083d18221e0dffb508f2cb54444be1cd260a80afce7f9f8aaee8c12e8c39"} Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.437939 4979 generic.go:334] "Generic (PLEG): container finished" podID="90346f0c-7cc3-4f3c-a29f-9b7265eff703" containerID="22f97911fc2dfbe2d7800553503f0c8338bac7f33443e8a617f5b406e5bdc412" exitCode=0 Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.437983 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ds8kf" event={"ID":"90346f0c-7cc3-4f3c-a29f-9b7265eff703","Type":"ContainerDied","Data":"22f97911fc2dfbe2d7800553503f0c8338bac7f33443e8a617f5b406e5bdc412"} Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.438009 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ds8kf" event={"ID":"90346f0c-7cc3-4f3c-a29f-9b7265eff703","Type":"ContainerStarted","Data":"3ce4e8a0b80e74cf6b7855adeab8bb5abf046c11861a1831d67a9b499ae3a221"} Jan 30 23:12:08 crc kubenswrapper[4979]: I0130 23:12:08.452361 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-dc54-account-create-update-qv7gj" podStartSLOduration=1.452334702 podStartE2EDuration="1.452334702s" podCreationTimestamp="2026-01-30 23:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:08.450624825 +0000 UTC m=+5524.411871858" 
watchObservedRunningTime="2026-01-30 23:12:08.452334702 +0000 UTC m=+5524.413581735" Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.447689 4979 generic.go:334] "Generic (PLEG): container finished" podID="59dad3f6-f4ce-4ce7-8364-044694d448f1" containerID="1029e32864f04940f8e059d045d4582115f310e97f7c3c3262b89f2a7fc67ed7" exitCode=0 Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.447851 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dc54-account-create-update-qv7gj" event={"ID":"59dad3f6-f4ce-4ce7-8364-044694d448f1","Type":"ContainerDied","Data":"1029e32864f04940f8e059d045d4582115f310e97f7c3c3262b89f2a7fc67ed7"} Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.842281 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.904473 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90346f0c-7cc3-4f3c-a29f-9b7265eff703-operator-scripts\") pod \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.904608 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kw5p\" (UniqueName: \"kubernetes.io/projected/90346f0c-7cc3-4f3c-a29f-9b7265eff703-kube-api-access-8kw5p\") pod \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\" (UID: \"90346f0c-7cc3-4f3c-a29f-9b7265eff703\") " Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.905502 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90346f0c-7cc3-4f3c-a29f-9b7265eff703-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90346f0c-7cc3-4f3c-a29f-9b7265eff703" (UID: "90346f0c-7cc3-4f3c-a29f-9b7265eff703"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:09 crc kubenswrapper[4979]: I0130 23:12:09.910824 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90346f0c-7cc3-4f3c-a29f-9b7265eff703-kube-api-access-8kw5p" (OuterVolumeSpecName: "kube-api-access-8kw5p") pod "90346f0c-7cc3-4f3c-a29f-9b7265eff703" (UID: "90346f0c-7cc3-4f3c-a29f-9b7265eff703"). InnerVolumeSpecName "kube-api-access-8kw5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.007556 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kw5p\" (UniqueName: \"kubernetes.io/projected/90346f0c-7cc3-4f3c-a29f-9b7265eff703-kube-api-access-8kw5p\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.007589 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90346f0c-7cc3-4f3c-a29f-9b7265eff703-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.462149 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ds8kf" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.462097 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ds8kf" event={"ID":"90346f0c-7cc3-4f3c-a29f-9b7265eff703","Type":"ContainerDied","Data":"3ce4e8a0b80e74cf6b7855adeab8bb5abf046c11861a1831d67a9b499ae3a221"} Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.462239 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ce4e8a0b80e74cf6b7855adeab8bb5abf046c11861a1831d67a9b499ae3a221" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.821425 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.822128 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlq8d\" (UniqueName: \"kubernetes.io/projected/59dad3f6-f4ce-4ce7-8364-044694d448f1-kube-api-access-hlq8d\") pod \"59dad3f6-f4ce-4ce7-8364-044694d448f1\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.835645 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59dad3f6-f4ce-4ce7-8364-044694d448f1-kube-api-access-hlq8d" (OuterVolumeSpecName: "kube-api-access-hlq8d") pod "59dad3f6-f4ce-4ce7-8364-044694d448f1" (UID: "59dad3f6-f4ce-4ce7-8364-044694d448f1"). InnerVolumeSpecName "kube-api-access-hlq8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.924112 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dad3f6-f4ce-4ce7-8364-044694d448f1-operator-scripts\") pod \"59dad3f6-f4ce-4ce7-8364-044694d448f1\" (UID: \"59dad3f6-f4ce-4ce7-8364-044694d448f1\") " Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.924700 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlq8d\" (UniqueName: \"kubernetes.io/projected/59dad3f6-f4ce-4ce7-8364-044694d448f1-kube-api-access-hlq8d\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:10 crc kubenswrapper[4979]: I0130 23:12:10.924913 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59dad3f6-f4ce-4ce7-8364-044694d448f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59dad3f6-f4ce-4ce7-8364-044694d448f1" (UID: "59dad3f6-f4ce-4ce7-8364-044694d448f1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:11 crc kubenswrapper[4979]: I0130 23:12:11.026089 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dad3f6-f4ce-4ce7-8364-044694d448f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:11 crc kubenswrapper[4979]: I0130 23:12:11.474171 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-dc54-account-create-update-qv7gj" event={"ID":"59dad3f6-f4ce-4ce7-8364-044694d448f1","Type":"ContainerDied","Data":"8454083d18221e0dffb508f2cb54444be1cd260a80afce7f9f8aaee8c12e8c39"} Jan 30 23:12:11 crc kubenswrapper[4979]: I0130 23:12:11.474538 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8454083d18221e0dffb508f2cb54444be1cd260a80afce7f9f8aaee8c12e8c39" Jan 30 23:12:11 crc kubenswrapper[4979]: I0130 23:12:11.474225 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-dc54-account-create-update-qv7gj" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.586721 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xz2wl"] Jan 30 23:12:12 crc kubenswrapper[4979]: E0130 23:12:12.587087 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90346f0c-7cc3-4f3c-a29f-9b7265eff703" containerName="mariadb-database-create" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.587100 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="90346f0c-7cc3-4f3c-a29f-9b7265eff703" containerName="mariadb-database-create" Jan 30 23:12:12 crc kubenswrapper[4979]: E0130 23:12:12.587116 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59dad3f6-f4ce-4ce7-8364-044694d448f1" containerName="mariadb-account-create-update" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.587122 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="59dad3f6-f4ce-4ce7-8364-044694d448f1" containerName="mariadb-account-create-update" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.587314 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="90346f0c-7cc3-4f3c-a29f-9b7265eff703" containerName="mariadb-database-create" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.587339 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="59dad3f6-f4ce-4ce7-8364-044694d448f1" containerName="mariadb-account-create-update" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.587949 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.590674 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.590699 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-92r4q" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.658112 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-db-sync-config-data\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.658181 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6d2\" (UniqueName: \"kubernetes.io/projected/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-kube-api-access-fm6d2\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.658207 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-combined-ca-bundle\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.658299 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-config-data\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.661105 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xz2wl"] Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.760265 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm6d2\" (UniqueName: \"kubernetes.io/projected/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-kube-api-access-fm6d2\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.760308 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-combined-ca-bundle\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.760385 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-config-data\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.760461 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-db-sync-config-data\") pod 
\"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.765901 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-config-data\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.768411 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-combined-ca-bundle\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.779135 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-db-sync-config-data\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.781737 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm6d2\" (UniqueName: \"kubernetes.io/projected/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-kube-api-access-fm6d2\") pod \"glance-db-sync-xz2wl\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:12 crc kubenswrapper[4979]: I0130 23:12:12.906316 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:13 crc kubenswrapper[4979]: I0130 23:12:13.440104 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xz2wl"] Jan 30 23:12:13 crc kubenswrapper[4979]: I0130 23:12:13.495277 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xz2wl" event={"ID":"531879a6-b909-4e84-bb7d-9d4e94c5e7f4","Type":"ContainerStarted","Data":"ae8a116b6fb783c60e7fb64f62534ab45339b2fca0b155394852d5289ad5a6fa"} Jan 30 23:12:14 crc kubenswrapper[4979]: I0130 23:12:14.502838 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xz2wl" event={"ID":"531879a6-b909-4e84-bb7d-9d4e94c5e7f4","Type":"ContainerStarted","Data":"6fa2af1e71f672ff07c7a8ecac5619dbc74e480f88cacfdf3bd6126656652ae7"} Jan 30 23:12:17 crc kubenswrapper[4979]: I0130 23:12:17.528272 4979 generic.go:334] "Generic (PLEG): container finished" podID="531879a6-b909-4e84-bb7d-9d4e94c5e7f4" containerID="6fa2af1e71f672ff07c7a8ecac5619dbc74e480f88cacfdf3bd6126656652ae7" exitCode=0 Jan 30 23:12:17 crc kubenswrapper[4979]: I0130 23:12:17.528347 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xz2wl" event={"ID":"531879a6-b909-4e84-bb7d-9d4e94c5e7f4","Type":"ContainerDied","Data":"6fa2af1e71f672ff07c7a8ecac5619dbc74e480f88cacfdf3bd6126656652ae7"} Jan 30 23:12:18 crc kubenswrapper[4979]: I0130 23:12:18.993683 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.080594 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-config-data\") pod \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.080757 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-combined-ca-bundle\") pod \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.080830 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm6d2\" (UniqueName: \"kubernetes.io/projected/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-kube-api-access-fm6d2\") pod \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.080935 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-db-sync-config-data\") pod \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\" (UID: \"531879a6-b909-4e84-bb7d-9d4e94c5e7f4\") " Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.085782 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "531879a6-b909-4e84-bb7d-9d4e94c5e7f4" (UID: "531879a6-b909-4e84-bb7d-9d4e94c5e7f4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.086548 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-kube-api-access-fm6d2" (OuterVolumeSpecName: "kube-api-access-fm6d2") pod "531879a6-b909-4e84-bb7d-9d4e94c5e7f4" (UID: "531879a6-b909-4e84-bb7d-9d4e94c5e7f4"). InnerVolumeSpecName "kube-api-access-fm6d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.102852 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "531879a6-b909-4e84-bb7d-9d4e94c5e7f4" (UID: "531879a6-b909-4e84-bb7d-9d4e94c5e7f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.128348 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-config-data" (OuterVolumeSpecName: "config-data") pod "531879a6-b909-4e84-bb7d-9d4e94c5e7f4" (UID: "531879a6-b909-4e84-bb7d-9d4e94c5e7f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.183473 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm6d2\" (UniqueName: \"kubernetes.io/projected/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-kube-api-access-fm6d2\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.183511 4979 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.183544 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.183557 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531879a6-b909-4e84-bb7d-9d4e94c5e7f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.562247 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xz2wl" event={"ID":"531879a6-b909-4e84-bb7d-9d4e94c5e7f4","Type":"ContainerDied","Data":"ae8a116b6fb783c60e7fb64f62534ab45339b2fca0b155394852d5289ad5a6fa"} Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.562359 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae8a116b6fb783c60e7fb64f62534ab45339b2fca0b155394852d5289ad5a6fa" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.562293 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xz2wl" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.940842 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d69cf7c75-5qv59"] Jan 30 23:12:19 crc kubenswrapper[4979]: E0130 23:12:19.941642 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531879a6-b909-4e84-bb7d-9d4e94c5e7f4" containerName="glance-db-sync" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.941665 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="531879a6-b909-4e84-bb7d-9d4e94c5e7f4" containerName="glance-db-sync" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.941884 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="531879a6-b909-4e84-bb7d-9d4e94c5e7f4" containerName="glance-db-sync" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.942998 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:19 crc kubenswrapper[4979]: I0130 23:12:19.957015 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d69cf7c75-5qv59"] Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.005812 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-dns-svc\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.005904 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg5lj\" (UniqueName: \"kubernetes.io/projected/2ab8f580-9bea-44a9-a732-ef54cf9eef47-kube-api-access-bg5lj\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.005960 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-sb\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.005987 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-nb\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.006054 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-config\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.006151 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.007955 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.023609 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.023631 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.023903 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-92r4q" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.024643 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.045010 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108592 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-logs\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108650 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-config\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108701 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108755 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-ceph\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108777 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108815 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-dns-svc\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108891 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg5lj\" (UniqueName: \"kubernetes.io/projected/2ab8f580-9bea-44a9-a732-ef54cf9eef47-kube-api-access-bg5lj\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: 
\"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108933 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108972 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-sb\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.108997 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-nb\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.109054 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.109091 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlbnc\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-kube-api-access-zlbnc\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.110378 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-config\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.110424 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-dns-svc\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.113003 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-sb\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.115934 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-nb\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " 
pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.125793 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.127372 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.135937 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.147046 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg5lj\" (UniqueName: \"kubernetes.io/projected/2ab8f580-9bea-44a9-a732-ef54cf9eef47-kube-api-access-bg5lj\") pod \"dnsmasq-dns-7d69cf7c75-5qv59\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.147533 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210435 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210527 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-ceph\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210553 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210576 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-logs\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210606 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqs4\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-kube-api-access-tkqs4\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210636 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210667 
4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210718 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210810 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.210992 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.211119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlbnc\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-kube-api-access-zlbnc\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.211155 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.211158 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-logs\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.211296 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.211354 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.211574 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-logs\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.215486 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.216251 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-ceph\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.216440 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.228698 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlbnc\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-kube-api-access-zlbnc\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.228776 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.275434 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.314209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.314482 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.314518 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.316386 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-logs\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.316870 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-logs\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.317326 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkqs4\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-kube-api-access-tkqs4\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.317391 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.317798 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.317878 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.318512 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.320185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.320305 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.320669 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.333805 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.338215 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkqs4\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-kube-api-access-tkqs4\") pod \"glance-default-internal-api-0\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.482083 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.801142 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d69cf7c75-5qv59"] Jan 30 23:12:20 crc kubenswrapper[4979]: I0130 23:12:20.941415 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.288212 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:21 crc kubenswrapper[4979]: W0130 23:12:21.292559 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd22d21d5_bd0b_4ad6_bd03_d024ff808850.slice/crio-16f0bc2ad94e74fdba0d19d07447d9b4a1d5bd0d0089f365beb394d9a6ccc6e4 WatchSource:0}: Error finding container 16f0bc2ad94e74fdba0d19d07447d9b4a1d5bd0d0089f365beb394d9a6ccc6e4: Status 404 returned error can't find the container with id 16f0bc2ad94e74fdba0d19d07447d9b4a1d5bd0d0089f365beb394d9a6ccc6e4 Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.379528 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.602995 4979 generic.go:334] "Generic (PLEG): container finished" podID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerID="ce550ab1c6e408aea10d06173b7920d5c55fe0078943da671c3598da2665ca61" exitCode=0 Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.603144 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" event={"ID":"2ab8f580-9bea-44a9-a732-ef54cf9eef47","Type":"ContainerDied","Data":"ce550ab1c6e408aea10d06173b7920d5c55fe0078943da671c3598da2665ca61"} Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.603229 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" event={"ID":"2ab8f580-9bea-44a9-a732-ef54cf9eef47","Type":"ContainerStarted","Data":"e0a67ae163b7249d5be022ae19e45164ea2e40a675fe276f1793defd9586ab15"} Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.608642 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6723bda-cb31-4951-b243-9a358b8e65f0","Type":"ContainerStarted","Data":"ea71b32de48e4a49d45264b0aa2f381b0548149ec8a0db018450e3df0c20f8ae"} Jan 30 23:12:21 crc kubenswrapper[4979]: I0130 23:12:21.609894 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d22d21d5-bd0b-4ad6-bd03-d024ff808850","Type":"ContainerStarted","Data":"16f0bc2ad94e74fdba0d19d07447d9b4a1d5bd0d0089f365beb394d9a6ccc6e4"} Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.618995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d22d21d5-bd0b-4ad6-bd03-d024ff808850","Type":"ContainerStarted","Data":"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5"} Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.619727 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d22d21d5-bd0b-4ad6-bd03-d024ff808850","Type":"ContainerStarted","Data":"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555"} Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.620648 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" event={"ID":"2ab8f580-9bea-44a9-a732-ef54cf9eef47","Type":"ContainerStarted","Data":"949025542d878f0aec57178ae4767449919585cb47ec404495f570b3fe0d8899"} Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.620750 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.623116 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6723bda-cb31-4951-b243-9a358b8e65f0","Type":"ContainerStarted","Data":"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370"} Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.623140 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6723bda-cb31-4951-b243-9a358b8e65f0","Type":"ContainerStarted","Data":"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9"} Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.623219 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-log" containerID="cri-o://81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9" gracePeriod=30 Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.623271 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-httpd" containerID="cri-o://6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370" gracePeriod=30 Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.652966 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.652941171 podStartE2EDuration="2.652941171s" podCreationTimestamp="2026-01-30 23:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:22.646196619 +0000 UTC m=+5538.607443642" watchObservedRunningTime="2026-01-30 23:12:22.652941171 +0000 UTC m=+5538.614188204" Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.671206 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" podStartSLOduration=3.671190511 podStartE2EDuration="3.671190511s" podCreationTimestamp="2026-01-30 23:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:22.665673203 +0000 UTC m=+5538.626920236" watchObservedRunningTime="2026-01-30 23:12:22.671190511 +0000 UTC m=+5538.632437544" Jan 30 23:12:22 crc kubenswrapper[4979]: I0130 23:12:22.699159 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.699137633 podStartE2EDuration="3.699137633s" podCreationTimestamp="2026-01-30 23:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:22.689672888 +0000 UTC m=+5538.650919941" watchObservedRunningTime="2026-01-30 23:12:22.699137633 +0000 UTC m=+5538.660384666" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.189134 4979 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281246 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-config-data\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281314 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-logs\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281347 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-combined-ca-bundle\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281400 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-httpd-run\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281433 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlbnc\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-kube-api-access-zlbnc\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281514 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-ceph\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.281549 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-scripts\") pod \"b6723bda-cb31-4951-b243-9a358b8e65f0\" (UID: \"b6723bda-cb31-4951-b243-9a358b8e65f0\") " Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.283465 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-logs" (OuterVolumeSpecName: "logs") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.283920 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.288506 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-scripts" (OuterVolumeSpecName: "scripts") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.288549 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-ceph" (OuterVolumeSpecName: "ceph") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.289463 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-kube-api-access-zlbnc" (OuterVolumeSpecName: "kube-api-access-zlbnc") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "kube-api-access-zlbnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.328328 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.342628 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-config-data" (OuterVolumeSpecName: "config-data") pod "b6723bda-cb31-4951-b243-9a358b8e65f0" (UID: "b6723bda-cb31-4951-b243-9a358b8e65f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.382994 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.383041 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlbnc\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-kube-api-access-zlbnc\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.383054 4979 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6723bda-cb31-4951-b243-9a358b8e65f0-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.383064 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.383071 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.383079 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6723bda-cb31-4951-b243-9a358b8e65f0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.383088 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6723bda-cb31-4951-b243-9a358b8e65f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.426338 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632269 4979 generic.go:334] "Generic (PLEG): container finished" podID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerID="6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370" exitCode=0 Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632306 4979 generic.go:334] "Generic (PLEG): container finished" podID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerID="81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9" exitCode=143 Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632334 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632335 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6723bda-cb31-4951-b243-9a358b8e65f0","Type":"ContainerDied","Data":"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370"} Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632394 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6723bda-cb31-4951-b243-9a358b8e65f0","Type":"ContainerDied","Data":"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9"} Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632417 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6723bda-cb31-4951-b243-9a358b8e65f0","Type":"ContainerDied","Data":"ea71b32de48e4a49d45264b0aa2f381b0548149ec8a0db018450e3df0c20f8ae"} Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.632435 4979 scope.go:117] "RemoveContainer" containerID="6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.703463 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.708616 4979 scope.go:117] "RemoveContainer" containerID="81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.715683 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.773967 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:23 crc kubenswrapper[4979]: E0130 23:12:23.774666 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-httpd" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.774689 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-httpd" Jan 30 23:12:23 crc kubenswrapper[4979]: E0130 23:12:23.774724 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-log" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.774730 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-log" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.774941 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-log" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.774955 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" containerName="glance-httpd" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.783226 4979 scope.go:117] "RemoveContainer" containerID="6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.784924 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.789485 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.795221 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 23:12:23 crc kubenswrapper[4979]: E0130 23:12:23.796395 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370\": container with ID starting with 6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370 not found: ID does not exist" containerID="6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.796457 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370"} err="failed to get container status \"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370\": rpc error: code = NotFound desc = could not find container \"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370\": container with ID starting with 6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370 not found: ID does not exist" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.796488 4979 scope.go:117] "RemoveContainer" containerID="81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9" Jan 30 23:12:23 crc kubenswrapper[4979]: E0130 23:12:23.796961 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9\": container with ID starting with 81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9 not found: ID does not exist" containerID="81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.796994 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9"} err="failed to get container status \"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9\": rpc error: code = NotFound desc = could not find container \"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9\": container with ID starting with 81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9 not found: ID does not exist" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.797056 4979 scope.go:117] "RemoveContainer" containerID="6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.797389 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370"} err="failed to get container status \"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370\": rpc error: code = NotFound desc = could not find container \"6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370\": container with ID starting with 6407e2ce660e27c7f37934942dca944bb28eaac9e647c143c01985d918d27370 not found: ID does not exist" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.797409 4979 
scope.go:117] "RemoveContainer" containerID="81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.797691 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9"} err="failed to get container status \"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9\": rpc error: code = NotFound desc = could not find container \"81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9\": container with ID starting with 81e9628778c1a06691d6ab71526c212603fc828cefe873bbc1542ec7afe2dda9 not found: ID does not exist" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.918730 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-logs\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.918785 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-config-data\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.918815 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.919075 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-ceph\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.919120 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpd5\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-kube-api-access-zzpd5\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.919453 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:23 crc kubenswrapper[4979]: I0130 23:12:23.919554 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-scripts\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc 
kubenswrapper[4979]: I0130 23:12:24.021693 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.021751 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-scripts\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.021769 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-logs\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.021793 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-config-data\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.021833 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.021862 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-ceph\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.021901 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpd5\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-kube-api-access-zzpd5\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.022398 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.023217 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-logs\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.026616 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-ceph\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.028083 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-config-data\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.029776 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-scripts\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.029857 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.040360 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzpd5\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-kube-api-access-zzpd5\") pod \"glance-default-external-api-0\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.118322 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.641729 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-log" containerID="cri-o://4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555" gracePeriod=30 Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.641792 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-httpd" containerID="cri-o://8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5" gracePeriod=30 Jan 30 23:12:24 crc kubenswrapper[4979]: I0130 23:12:24.767647 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:12:24 crc kubenswrapper[4979]: W0130 23:12:24.803096 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eceabd7_12d5_42b8_9add_f89801459249.slice/crio-2b4011d528c5f4fb422a09023b408acf162ec5c71b0df8f8e4325d433023d56a WatchSource:0}: Error finding container 2b4011d528c5f4fb422a09023b408acf162ec5c71b0df8f8e4325d433023d56a: Status 404 returned error can't find the container with id 2b4011d528c5f4fb422a09023b408acf162ec5c71b0df8f8e4325d433023d56a Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.090699 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6723bda-cb31-4951-b243-9a358b8e65f0" path="/var/lib/kubelet/pods/b6723bda-cb31-4951-b243-9a358b8e65f0/volumes" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.313727 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.444916 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-config-data\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.445423 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-combined-ca-bundle\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.445491 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkqs4\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-kube-api-access-tkqs4\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.445534 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-logs\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.445888 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-scripts\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.445932 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-httpd-run\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.445959 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-ceph\") pod \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\" (UID: \"d22d21d5-bd0b-4ad6-bd03-d024ff808850\") " Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.449092 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.449158 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-logs" (OuterVolumeSpecName: "logs") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.451006 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-kube-api-access-tkqs4" (OuterVolumeSpecName: "kube-api-access-tkqs4") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "kube-api-access-tkqs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.451632 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-ceph" (OuterVolumeSpecName: "ceph") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.457612 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-scripts" (OuterVolumeSpecName: "scripts") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.494235 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.520460 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-config-data" (OuterVolumeSpecName: "config-data") pod "d22d21d5-bd0b-4ad6-bd03-d024ff808850" (UID: "d22d21d5-bd0b-4ad6-bd03-d024ff808850"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550422 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550467 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkqs4\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-kube-api-access-tkqs4\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550485 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550528 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550542 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d22d21d5-bd0b-4ad6-bd03-d024ff808850-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550553 4979 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d22d21d5-bd0b-4ad6-bd03-d024ff808850-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.550565 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22d21d5-bd0b-4ad6-bd03-d024ff808850-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654399 4979 generic.go:334] "Generic (PLEG): container finished" podID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerID="8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5" exitCode=0 Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654442 4979 generic.go:334] "Generic (PLEG): container finished" podID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerID="4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555" exitCode=143 Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654472 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654634 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d22d21d5-bd0b-4ad6-bd03-d024ff808850","Type":"ContainerDied","Data":"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5"} Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654672 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d22d21d5-bd0b-4ad6-bd03-d024ff808850","Type":"ContainerDied","Data":"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555"} Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654687 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d22d21d5-bd0b-4ad6-bd03-d024ff808850","Type":"ContainerDied","Data":"16f0bc2ad94e74fdba0d19d07447d9b4a1d5bd0d0089f365beb394d9a6ccc6e4"} Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.654741 4979 scope.go:117] "RemoveContainer" containerID="8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.658620 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2eceabd7-12d5-42b8-9add-f89801459249","Type":"ContainerStarted","Data":"82374da9e2dd47fd345e23e5da5677353e686a5db8eb072ab0ddef15a716a6f6"} Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.658652 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2eceabd7-12d5-42b8-9add-f89801459249","Type":"ContainerStarted","Data":"2b4011d528c5f4fb422a09023b408acf162ec5c71b0df8f8e4325d433023d56a"} Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.693385 4979 scope.go:117] "RemoveContainer" containerID="4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.701480 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.731211 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.744968 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:25 crc kubenswrapper[4979]: E0130 23:12:25.745424 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-log" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.745440 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-log" Jan 30 23:12:25 crc kubenswrapper[4979]: E0130 23:12:25.745461 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-httpd" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.745467 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-httpd" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.745626 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-log" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.745643 4979 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" containerName="glance-httpd" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.746965 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.752564 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.754053 4979 scope.go:117] "RemoveContainer" containerID="8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.754281 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:25 crc kubenswrapper[4979]: E0130 23:12:25.757809 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5\": container with ID starting with 8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5 not found: ID does not exist" containerID="8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.757865 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5"} err="failed to get container status \"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5\": rpc error: code = NotFound desc = could not find container \"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5\": container with ID starting with 8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5 not found: ID does not exist" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.757891 4979 scope.go:117] "RemoveContainer" containerID="4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555" Jan 30 23:12:25 crc kubenswrapper[4979]: E0130 23:12:25.758522 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555\": container with ID starting with 4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555 not found: ID does not exist" containerID="4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.758541 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555"} err="failed to get container status \"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555\": rpc error: code = NotFound desc = could not find container \"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555\": container with ID starting with 4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555 not found: ID does not exist" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.758556 4979 scope.go:117] "RemoveContainer" containerID="8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.758858 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5"} 
err="failed to get container status \"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5\": rpc error: code = NotFound desc = could not find container \"8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5\": container with ID starting with 8642fc586039b2f3c5444ea817838dd24845d42a4c64f80af730c64b816f7ec5 not found: ID does not exist" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.758871 4979 scope.go:117] "RemoveContainer" containerID="4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.759371 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555"} err="failed to get container status \"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555\": rpc error: code = NotFound desc = could not find container \"4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555\": container with ID starting with 4a405d4c0a0cc2f296907b48215e55ff9c5616e30c2d0593db665bc9f33ed555 not found: ID does not exist" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.855729 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.855840 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.855872 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.855901 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.855932 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg4wj\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-kube-api-access-hg4wj\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.855994 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 
23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.856068 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957593 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957677 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957719 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957766 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957783 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957800 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.957819 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg4wj\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-kube-api-access-hg4wj\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.958788 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.958853 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.963216 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.963757 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.969128 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.969263 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:25 crc kubenswrapper[4979]: I0130 23:12:25.976379 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg4wj\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-kube-api-access-hg4wj\") pod \"glance-default-internal-api-0\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:12:26 crc kubenswrapper[4979]: I0130 23:12:26.061741 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:26 crc kubenswrapper[4979]: I0130 23:12:26.580026 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:12:26 crc kubenswrapper[4979]: I0130 23:12:26.687471 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a","Type":"ContainerStarted","Data":"028416f98f1589f42948c16d708a30eb83a748a9cb41317ffc40f3e850e93529"} Jan 30 23:12:26 crc kubenswrapper[4979]: I0130 23:12:26.695264 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2eceabd7-12d5-42b8-9add-f89801459249","Type":"ContainerStarted","Data":"55209b1e5d91700ba07d75b2118b6f30952e7e514892e758c421dc19261f2065"} Jan 30 23:12:26 crc kubenswrapper[4979]: I0130 23:12:26.739227 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.739203444 podStartE2EDuration="3.739203444s" podCreationTimestamp="2026-01-30 23:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:26.730606443 +0000 UTC m=+5542.691853486" watchObservedRunningTime="2026-01-30 23:12:26.739203444 +0000 UTC m=+5542.700450477" Jan 30 23:12:27 crc kubenswrapper[4979]: I0130 23:12:27.084951 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d22d21d5-bd0b-4ad6-bd03-d024ff808850" path="/var/lib/kubelet/pods/d22d21d5-bd0b-4ad6-bd03-d024ff808850/volumes" Jan 30 23:12:27 crc kubenswrapper[4979]: I0130 23:12:27.704437 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a","Type":"ContainerStarted","Data":"4b0ad042bd3a134b298fe28cc075f762b3f855ad7fe9585ff50024a2eb368232"} Jan 30 23:12:27 crc kubenswrapper[4979]: I0130 23:12:27.704754 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a","Type":"ContainerStarted","Data":"b5896731cfc487f3a385246f60e958d9a149ef54dc521649352a734a8e89d545"} Jan 30 23:12:27 crc kubenswrapper[4979]: I0130 23:12:27.729436 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.729411552 podStartE2EDuration="2.729411552s" podCreationTimestamp="2026-01-30 23:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:27.722796145 +0000 UTC m=+5543.684043178" watchObservedRunningTime="2026-01-30 23:12:27.729411552 +0000 UTC m=+5543.690658585" Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.277978 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.354631 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6b95d565-xrrwt"] Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.354890 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="dnsmasq-dns" 
containerID="cri-o://1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4" gracePeriod=10 Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.386739 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.40:5353: connect: connection refused" Jan 30 23:12:30 crc kubenswrapper[4979]: E0130 23:12:30.511152 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e29f1a4_dca0_42b8_8ee9_e040433dad76.slice/crio-conmon-1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e29f1a4_dca0_42b8_8ee9_e040433dad76.slice/crio-1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4.scope\": RecentStats: unable to find data in memory cache]" Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.750729 4979 generic.go:334] "Generic (PLEG): container finished" podID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerID="1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4" exitCode=0 Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.751104 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" event={"ID":"8e29f1a4-dca0-42b8-8ee9-e040433dad76","Type":"ContainerDied","Data":"1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4"} Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.863802 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.968718 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-nb\") pod \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.968773 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-sb\") pod \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.968826 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qx8f\" (UniqueName: \"kubernetes.io/projected/8e29f1a4-dca0-42b8-8ee9-e040433dad76-kube-api-access-8qx8f\") pod \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.969719 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-config\") pod \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\" (UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.969815 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-dns-svc\") pod \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\" 
(UID: \"8e29f1a4-dca0-42b8-8ee9-e040433dad76\") " Jan 30 23:12:30 crc kubenswrapper[4979]: I0130 23:12:30.975121 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e29f1a4-dca0-42b8-8ee9-e040433dad76-kube-api-access-8qx8f" (OuterVolumeSpecName: "kube-api-access-8qx8f") pod "8e29f1a4-dca0-42b8-8ee9-e040433dad76" (UID: "8e29f1a4-dca0-42b8-8ee9-e040433dad76"). InnerVolumeSpecName "kube-api-access-8qx8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.017796 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e29f1a4-dca0-42b8-8ee9-e040433dad76" (UID: "8e29f1a4-dca0-42b8-8ee9-e040433dad76"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.027878 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e29f1a4-dca0-42b8-8ee9-e040433dad76" (UID: "8e29f1a4-dca0-42b8-8ee9-e040433dad76"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.030385 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-config" (OuterVolumeSpecName: "config") pod "8e29f1a4-dca0-42b8-8ee9-e040433dad76" (UID: "8e29f1a4-dca0-42b8-8ee9-e040433dad76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.032528 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8e29f1a4-dca0-42b8-8ee9-e040433dad76" (UID: "8e29f1a4-dca0-42b8-8ee9-e040433dad76"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.072197 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.072427 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.072499 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.072557 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e29f1a4-dca0-42b8-8ee9-e040433dad76-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.072612 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qx8f\" (UniqueName: \"kubernetes.io/projected/8e29f1a4-dca0-42b8-8ee9-e040433dad76-kube-api-access-8qx8f\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.762021 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" event={"ID":"8e29f1a4-dca0-42b8-8ee9-e040433dad76","Type":"ContainerDied","Data":"4760175f4dd061a01c395f5191d4dc74af0c065922b917876039e3461e28ddb3"} Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.762324 4979 scope.go:117] "RemoveContainer" containerID="1c0d5146ceadb430708d4677b858a9685b404c47777c3b2822215bf316b3cfc4" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.762102 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6b95d565-xrrwt" Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.789227 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6b95d565-xrrwt"] Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.799428 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6b95d565-xrrwt"] Jan 30 23:12:31 crc kubenswrapper[4979]: I0130 23:12:31.799985 4979 scope.go:117] "RemoveContainer" containerID="70e7a3e289c9bede605a4d28f895b056899de6dff342f7658a2ae4deec0c89ae" Jan 30 23:12:33 crc kubenswrapper[4979]: I0130 23:12:33.081363 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" path="/var/lib/kubelet/pods/8e29f1a4-dca0-42b8-8ee9-e040433dad76/volumes" Jan 30 23:12:34 crc kubenswrapper[4979]: I0130 23:12:34.119225 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 23:12:34 crc kubenswrapper[4979]: I0130 23:12:34.119345 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 23:12:34 crc kubenswrapper[4979]: I0130 23:12:34.165325 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 23:12:34 crc kubenswrapper[4979]: I0130 23:12:34.167293 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 23:12:34 crc kubenswrapper[4979]: I0130 23:12:34.789847 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 23:12:34 crc kubenswrapper[4979]: I0130 23:12:34.789926 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.064317 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.066297 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.100170 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.114015 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.743795 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.803010 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.803651 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.803705 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:36 crc kubenswrapper[4979]: I0130 23:12:36.850810 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 23:12:38 crc 
kubenswrapper[4979]: I0130 23:12:38.845385 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:38 crc kubenswrapper[4979]: I0130 23:12:38.845795 4979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 23:12:38 crc kubenswrapper[4979]: I0130 23:12:38.869486 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.818396 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-6fj56"] Jan 30 23:12:44 crc kubenswrapper[4979]: E0130 23:12:44.819787 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="init" Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.819805 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="init" Jan 30 23:12:44 crc kubenswrapper[4979]: E0130 23:12:44.819818 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="dnsmasq-dns" Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.819824 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="dnsmasq-dns" Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.821139 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e29f1a4-dca0-42b8-8ee9-e040433dad76" containerName="dnsmasq-dns" Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.822098 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6fj56" Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.841301 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6fj56"] Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.922513 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5d73-account-create-update-kh7g2"] Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.923870 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5d73-account-create-update-kh7g2" Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.930043 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.941798 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5d73-account-create-update-kh7g2"] Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.948720 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b37754-6d06-4d68-bf4b-34b553d5750e-operator-scripts\") pod \"placement-db-create-6fj56\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " pod="openstack/placement-db-create-6fj56" Jan 30 23:12:44 crc kubenswrapper[4979]: I0130 23:12:44.948797 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2qf9\" (UniqueName: \"kubernetes.io/projected/26b37754-6d06-4d68-bf4b-34b553d5750e-kube-api-access-l2qf9\") pod \"placement-db-create-6fj56\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " pod="openstack/placement-db-create-6fj56" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.050078 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2qf9\" (UniqueName: \"kubernetes.io/projected/26b37754-6d06-4d68-bf4b-34b553d5750e-kube-api-access-l2qf9\") pod \"placement-db-create-6fj56\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " pod="openstack/placement-db-create-6fj56" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.050157 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7ef2a65-30bc-4af2-aa45-16b8b793359c-operator-scripts\") pod \"placement-5d73-account-create-update-kh7g2\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " pod="openstack/placement-5d73-account-create-update-kh7g2" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.050463 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b37754-6d06-4d68-bf4b-34b553d5750e-operator-scripts\") pod \"placement-db-create-6fj56\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " pod="openstack/placement-db-create-6fj56" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.050561 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbwws\" (UniqueName: \"kubernetes.io/projected/d7ef2a65-30bc-4af2-aa45-16b8b793359c-kube-api-access-fbwws\") pod \"placement-5d73-account-create-update-kh7g2\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " pod="openstack/placement-5d73-account-create-update-kh7g2" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.051577 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b37754-6d06-4d68-bf4b-34b553d5750e-operator-scripts\") pod \"placement-db-create-6fj56\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " pod="openstack/placement-db-create-6fj56" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.079807 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2qf9\" (UniqueName: 
\"kubernetes.io/projected/26b37754-6d06-4d68-bf4b-34b553d5750e-kube-api-access-l2qf9\") pod \"placement-db-create-6fj56\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " pod="openstack/placement-db-create-6fj56" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.142222 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6fj56" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.152219 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbwws\" (UniqueName: \"kubernetes.io/projected/d7ef2a65-30bc-4af2-aa45-16b8b793359c-kube-api-access-fbwws\") pod \"placement-5d73-account-create-update-kh7g2\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " pod="openstack/placement-5d73-account-create-update-kh7g2" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.152332 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7ef2a65-30bc-4af2-aa45-16b8b793359c-operator-scripts\") pod \"placement-5d73-account-create-update-kh7g2\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " pod="openstack/placement-5d73-account-create-update-kh7g2" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.155199 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7ef2a65-30bc-4af2-aa45-16b8b793359c-operator-scripts\") pod \"placement-5d73-account-create-update-kh7g2\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " pod="openstack/placement-5d73-account-create-update-kh7g2" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.174574 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbwws\" (UniqueName: \"kubernetes.io/projected/d7ef2a65-30bc-4af2-aa45-16b8b793359c-kube-api-access-fbwws\") pod \"placement-5d73-account-create-update-kh7g2\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " pod="openstack/placement-5d73-account-create-update-kh7g2" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.306385 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5d73-account-create-update-kh7g2" Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.612942 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6fj56"] Jan 30 23:12:45 crc kubenswrapper[4979]: W0130 23:12:45.618561 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26b37754_6d06_4d68_bf4b_34b553d5750e.slice/crio-c8fa3702668401a4acc2c865940bcb578514d9780df44d6d4deead608b0b8043 WatchSource:0}: Error finding container c8fa3702668401a4acc2c865940bcb578514d9780df44d6d4deead608b0b8043: Status 404 returned error can't find the container with id c8fa3702668401a4acc2c865940bcb578514d9780df44d6d4deead608b0b8043 Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.791319 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5d73-account-create-update-kh7g2"] Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.907141 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d73-account-create-update-kh7g2" event={"ID":"d7ef2a65-30bc-4af2-aa45-16b8b793359c","Type":"ContainerStarted","Data":"bf77eca2403d4cd7c920553e1d8e29a0e2f2640e4602e1da4935f3366a5776c9"} Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.910167 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fj56" event={"ID":"26b37754-6d06-4d68-bf4b-34b553d5750e","Type":"ContainerStarted","Data":"1ad4342510dcd831bcc75d1de4109d08c8cf80f260002f23328c1e9c71c6966a"} Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.910228 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fj56" event={"ID":"26b37754-6d06-4d68-bf4b-34b553d5750e","Type":"ContainerStarted","Data":"c8fa3702668401a4acc2c865940bcb578514d9780df44d6d4deead608b0b8043"} Jan 30 23:12:45 crc kubenswrapper[4979]: I0130 23:12:45.933434 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-6fj56" podStartSLOduration=1.9333985550000001 podStartE2EDuration="1.933398555s" podCreationTimestamp="2026-01-30 23:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:45.924345812 +0000 UTC m=+5561.885592855" watchObservedRunningTime="2026-01-30 23:12:45.933398555 +0000 UTC m=+5561.894645588" Jan 30 23:12:46 crc kubenswrapper[4979]: I0130 23:12:46.920243 4979 generic.go:334] "Generic (PLEG): container finished" podID="26b37754-6d06-4d68-bf4b-34b553d5750e" containerID="1ad4342510dcd831bcc75d1de4109d08c8cf80f260002f23328c1e9c71c6966a" exitCode=0 Jan 30 23:12:46 crc kubenswrapper[4979]: I0130 23:12:46.920329 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fj56" event={"ID":"26b37754-6d06-4d68-bf4b-34b553d5750e","Type":"ContainerDied","Data":"1ad4342510dcd831bcc75d1de4109d08c8cf80f260002f23328c1e9c71c6966a"} Jan 30 23:12:46 crc kubenswrapper[4979]: I0130 23:12:46.922021 4979 generic.go:334] "Generic (PLEG): container finished" podID="d7ef2a65-30bc-4af2-aa45-16b8b793359c" containerID="e78c967f90d787e6a500755dd51462d00698c1a63f9294556b2308f1758c7a1f" exitCode=0 Jan 30 23:12:46 crc kubenswrapper[4979]: I0130 23:12:46.922146 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d73-account-create-update-kh7g2" 
event={"ID":"d7ef2a65-30bc-4af2-aa45-16b8b793359c","Type":"ContainerDied","Data":"e78c967f90d787e6a500755dd51462d00698c1a63f9294556b2308f1758c7a1f"} Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.361290 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d73-account-create-update-kh7g2" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.366891 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6fj56" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.426978 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2qf9\" (UniqueName: \"kubernetes.io/projected/26b37754-6d06-4d68-bf4b-34b553d5750e-kube-api-access-l2qf9\") pod \"26b37754-6d06-4d68-bf4b-34b553d5750e\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.427150 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbwws\" (UniqueName: \"kubernetes.io/projected/d7ef2a65-30bc-4af2-aa45-16b8b793359c-kube-api-access-fbwws\") pod \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.427238 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7ef2a65-30bc-4af2-aa45-16b8b793359c-operator-scripts\") pod \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\" (UID: \"d7ef2a65-30bc-4af2-aa45-16b8b793359c\") " Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.427302 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b37754-6d06-4d68-bf4b-34b553d5750e-operator-scripts\") pod \"26b37754-6d06-4d68-bf4b-34b553d5750e\" (UID: \"26b37754-6d06-4d68-bf4b-34b553d5750e\") " Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.428042 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ef2a65-30bc-4af2-aa45-16b8b793359c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7ef2a65-30bc-4af2-aa45-16b8b793359c" (UID: "d7ef2a65-30bc-4af2-aa45-16b8b793359c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.428302 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b37754-6d06-4d68-bf4b-34b553d5750e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26b37754-6d06-4d68-bf4b-34b553d5750e" (UID: "26b37754-6d06-4d68-bf4b-34b553d5750e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.433968 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b37754-6d06-4d68-bf4b-34b553d5750e-kube-api-access-l2qf9" (OuterVolumeSpecName: "kube-api-access-l2qf9") pod "26b37754-6d06-4d68-bf4b-34b553d5750e" (UID: "26b37754-6d06-4d68-bf4b-34b553d5750e"). InnerVolumeSpecName "kube-api-access-l2qf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.441345 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7ef2a65-30bc-4af2-aa45-16b8b793359c-kube-api-access-fbwws" (OuterVolumeSpecName: "kube-api-access-fbwws") pod "d7ef2a65-30bc-4af2-aa45-16b8b793359c" (UID: "d7ef2a65-30bc-4af2-aa45-16b8b793359c"). InnerVolumeSpecName "kube-api-access-fbwws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.529734 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2qf9\" (UniqueName: \"kubernetes.io/projected/26b37754-6d06-4d68-bf4b-34b553d5750e-kube-api-access-l2qf9\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.529768 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbwws\" (UniqueName: \"kubernetes.io/projected/d7ef2a65-30bc-4af2-aa45-16b8b793359c-kube-api-access-fbwws\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.529779 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7ef2a65-30bc-4af2-aa45-16b8b793359c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.529788 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b37754-6d06-4d68-bf4b-34b553d5750e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.942335 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d73-account-create-update-kh7g2" event={"ID":"d7ef2a65-30bc-4af2-aa45-16b8b793359c","Type":"ContainerDied","Data":"bf77eca2403d4cd7c920553e1d8e29a0e2f2640e4602e1da4935f3366a5776c9"} Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.942682 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf77eca2403d4cd7c920553e1d8e29a0e2f2640e4602e1da4935f3366a5776c9" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.942710 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d73-account-create-update-kh7g2" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.944352 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fj56" event={"ID":"26b37754-6d06-4d68-bf4b-34b553d5750e","Type":"ContainerDied","Data":"c8fa3702668401a4acc2c865940bcb578514d9780df44d6d4deead608b0b8043"} Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.944373 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8fa3702668401a4acc2c865940bcb578514d9780df44d6d4deead608b0b8043" Jan 30 23:12:48 crc kubenswrapper[4979]: I0130 23:12:48.944406 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-6fj56" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.248460 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8cb96"] Jan 30 23:12:50 crc kubenswrapper[4979]: E0130 23:12:50.249145 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b37754-6d06-4d68-bf4b-34b553d5750e" containerName="mariadb-database-create" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.249159 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b37754-6d06-4d68-bf4b-34b553d5750e" containerName="mariadb-database-create" Jan 30 23:12:50 crc kubenswrapper[4979]: E0130 23:12:50.249184 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ef2a65-30bc-4af2-aa45-16b8b793359c" containerName="mariadb-account-create-update" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.249190 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ef2a65-30bc-4af2-aa45-16b8b793359c" containerName="mariadb-account-create-update" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.249359 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7ef2a65-30bc-4af2-aa45-16b8b793359c" containerName="mariadb-account-create-update" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.249374 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b37754-6d06-4d68-bf4b-34b553d5750e" containerName="mariadb-database-create" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.249906 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.253011 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.253366 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5dm54" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.253581 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.277460 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8cb96"] Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.288861 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fd9666d5-fmcqm"] Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.290267 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.301806 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fd9666d5-fmcqm"] Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.371789 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-sb\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.371850 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-config-data\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.371901 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b42fc3-64fe-40f2-9de5-b6f80489c601-logs\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.371938 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-nb\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.371971 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-combined-ca-bundle\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.372283 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4d8\" (UniqueName: \"kubernetes.io/projected/b5b42fc3-64fe-40f2-9de5-b6f80489c601-kube-api-access-tx4d8\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.372349 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-config\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.372630 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-dns-svc\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.372673 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dmf7\" (UniqueName: \"kubernetes.io/projected/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-kube-api-access-7dmf7\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.372758 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-scripts\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475435 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-combined-ca-bundle\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475495 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx4d8\" (UniqueName: \"kubernetes.io/projected/b5b42fc3-64fe-40f2-9de5-b6f80489c601-kube-api-access-tx4d8\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475523 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-config\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475590 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-dns-svc\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475617 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dmf7\" (UniqueName: \"kubernetes.io/projected/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-kube-api-access-7dmf7\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475656 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-scripts\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475741 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-sb\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475770 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-config-data\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475825 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b42fc3-64fe-40f2-9de5-b6f80489c601-logs\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.475864 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-nb\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.477111 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b42fc3-64fe-40f2-9de5-b6f80489c601-logs\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.477363 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-nb\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.477814 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-sb\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.477807 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-config\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.478200 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-dns-svc\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.483310 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-scripts\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.485185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-config-data\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc 
kubenswrapper[4979]: I0130 23:12:50.499628 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dmf7\" (UniqueName: \"kubernetes.io/projected/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-kube-api-access-7dmf7\") pod \"dnsmasq-dns-55fd9666d5-fmcqm\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.500914 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4d8\" (UniqueName: \"kubernetes.io/projected/b5b42fc3-64fe-40f2-9de5-b6f80489c601-kube-api-access-tx4d8\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.502640 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-combined-ca-bundle\") pod \"placement-db-sync-8cb96\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.577749 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.618382 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.905133 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8cb96"] Jan 30 23:12:50 crc kubenswrapper[4979]: I0130 23:12:50.997832 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8cb96" event={"ID":"b5b42fc3-64fe-40f2-9de5-b6f80489c601","Type":"ContainerStarted","Data":"8f0344395cf6379abf8e1a3eeb1fc6ed566ba11f831b0b6dc1f945364e9dd8b9"} Jan 30 23:12:51 crc kubenswrapper[4979]: I0130 23:12:51.212920 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fd9666d5-fmcqm"] Jan 30 23:12:51 crc kubenswrapper[4979]: W0130 23:12:51.218869 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c13ddd7_ca9f_4446_a482_09cf5b71ced0.slice/crio-cd06f6a3729d6e12fae56c41ee58dc413dd41675985b230257d9e7d128d1839d WatchSource:0}: Error finding container cd06f6a3729d6e12fae56c41ee58dc413dd41675985b230257d9e7d128d1839d: Status 404 returned error can't find the container with id cd06f6a3729d6e12fae56c41ee58dc413dd41675985b230257d9e7d128d1839d Jan 30 23:12:52 crc kubenswrapper[4979]: I0130 23:12:52.009263 4979 generic.go:334] "Generic (PLEG): container finished" podID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerID="e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e" exitCode=0 Jan 30 23:12:52 crc kubenswrapper[4979]: I0130 23:12:52.009594 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" event={"ID":"3c13ddd7-ca9f-4446-a482-09cf5b71ced0","Type":"ContainerDied","Data":"e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e"} Jan 30 23:12:52 crc kubenswrapper[4979]: I0130 23:12:52.009627 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" 
event={"ID":"3c13ddd7-ca9f-4446-a482-09cf5b71ced0","Type":"ContainerStarted","Data":"cd06f6a3729d6e12fae56c41ee58dc413dd41675985b230257d9e7d128d1839d"} Jan 30 23:12:52 crc kubenswrapper[4979]: I0130 23:12:52.012920 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8cb96" event={"ID":"b5b42fc3-64fe-40f2-9de5-b6f80489c601","Type":"ContainerStarted","Data":"a62465cb392e615a1f73cdd50e7e273cdf6ffb4563f5d71cdc8e1d86d9a79520"} Jan 30 23:12:52 crc kubenswrapper[4979]: I0130 23:12:52.075181 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8cb96" podStartSLOduration=2.075110212 podStartE2EDuration="2.075110212s" podCreationTimestamp="2026-01-30 23:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:52.068728401 +0000 UTC m=+5568.029975474" watchObservedRunningTime="2026-01-30 23:12:52.075110212 +0000 UTC m=+5568.036357275" Jan 30 23:12:53 crc kubenswrapper[4979]: I0130 23:12:53.025818 4979 generic.go:334] "Generic (PLEG): container finished" podID="b5b42fc3-64fe-40f2-9de5-b6f80489c601" containerID="a62465cb392e615a1f73cdd50e7e273cdf6ffb4563f5d71cdc8e1d86d9a79520" exitCode=0 Jan 30 23:12:53 crc kubenswrapper[4979]: I0130 23:12:53.025921 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8cb96" event={"ID":"b5b42fc3-64fe-40f2-9de5-b6f80489c601","Type":"ContainerDied","Data":"a62465cb392e615a1f73cdd50e7e273cdf6ffb4563f5d71cdc8e1d86d9a79520"} Jan 30 23:12:53 crc kubenswrapper[4979]: I0130 23:12:53.030947 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" event={"ID":"3c13ddd7-ca9f-4446-a482-09cf5b71ced0","Type":"ContainerStarted","Data":"6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed"} Jan 30 23:12:53 crc kubenswrapper[4979]: I0130 23:12:53.032225 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:12:53 crc kubenswrapper[4979]: I0130 23:12:53.100919 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" podStartSLOduration=3.100877086 podStartE2EDuration="3.100877086s" podCreationTimestamp="2026-01-30 23:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:53.08052055 +0000 UTC m=+5569.041767633" watchObservedRunningTime="2026-01-30 23:12:53.100877086 +0000 UTC m=+5569.062124179" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.345584 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.458226 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b42fc3-64fe-40f2-9de5-b6f80489c601-logs\") pod \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.458401 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-config-data\") pod \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.458441 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-scripts\") pod \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.458499 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b42fc3-64fe-40f2-9de5-b6f80489c601-logs" (OuterVolumeSpecName: "logs") pod "b5b42fc3-64fe-40f2-9de5-b6f80489c601" (UID: "b5b42fc3-64fe-40f2-9de5-b6f80489c601"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.458520 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4d8\" (UniqueName: \"kubernetes.io/projected/b5b42fc3-64fe-40f2-9de5-b6f80489c601-kube-api-access-tx4d8\") pod \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.458553 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-combined-ca-bundle\") pod \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\" (UID: \"b5b42fc3-64fe-40f2-9de5-b6f80489c601\") " Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.459083 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b42fc3-64fe-40f2-9de5-b6f80489c601-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.463991 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b42fc3-64fe-40f2-9de5-b6f80489c601-kube-api-access-tx4d8" (OuterVolumeSpecName: "kube-api-access-tx4d8") pod "b5b42fc3-64fe-40f2-9de5-b6f80489c601" (UID: "b5b42fc3-64fe-40f2-9de5-b6f80489c601"). InnerVolumeSpecName "kube-api-access-tx4d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.464278 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-scripts" (OuterVolumeSpecName: "scripts") pod "b5b42fc3-64fe-40f2-9de5-b6f80489c601" (UID: "b5b42fc3-64fe-40f2-9de5-b6f80489c601"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.485870 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5b42fc3-64fe-40f2-9de5-b6f80489c601" (UID: "b5b42fc3-64fe-40f2-9de5-b6f80489c601"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.487133 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-config-data" (OuterVolumeSpecName: "config-data") pod "b5b42fc3-64fe-40f2-9de5-b6f80489c601" (UID: "b5b42fc3-64fe-40f2-9de5-b6f80489c601"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.560377 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.560410 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx4d8\" (UniqueName: \"kubernetes.io/projected/b5b42fc3-64fe-40f2-9de5-b6f80489c601-kube-api-access-tx4d8\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.560421 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:54 crc kubenswrapper[4979]: I0130 23:12:54.560429 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b42fc3-64fe-40f2-9de5-b6f80489c601-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.051909 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8cb96" event={"ID":"b5b42fc3-64fe-40f2-9de5-b6f80489c601","Type":"ContainerDied","Data":"8f0344395cf6379abf8e1a3eeb1fc6ed566ba11f831b0b6dc1f945364e9dd8b9"} Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.051978 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f0344395cf6379abf8e1a3eeb1fc6ed566ba11f831b0b6dc1f945364e9dd8b9" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.051935 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8cb96" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.531397 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8565876748-g76rq"] Jan 30 23:12:55 crc kubenswrapper[4979]: E0130 23:12:55.532687 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b42fc3-64fe-40f2-9de5-b6f80489c601" containerName="placement-db-sync" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.532713 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b42fc3-64fe-40f2-9de5-b6f80489c601" containerName="placement-db-sync" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.533113 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b42fc3-64fe-40f2-9de5-b6f80489c601" containerName="placement-db-sync" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.535199 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.538394 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5dm54" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.541735 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.545220 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.586261 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7j4s\" (UniqueName: \"kubernetes.io/projected/019fe9ef-3972-45a8-82ec-8b566d9a1c58-kube-api-access-l7j4s\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.586326 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-combined-ca-bundle\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.586361 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019fe9ef-3972-45a8-82ec-8b566d9a1c58-logs\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.586539 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-config-data\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.587069 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-scripts\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: 
I0130 23:12:55.617144 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8565876748-g76rq"] Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.689020 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-scripts\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.689176 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7j4s\" (UniqueName: \"kubernetes.io/projected/019fe9ef-3972-45a8-82ec-8b566d9a1c58-kube-api-access-l7j4s\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.689227 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-combined-ca-bundle\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.689289 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019fe9ef-3972-45a8-82ec-8b566d9a1c58-logs\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.689351 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-config-data\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.690090 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019fe9ef-3972-45a8-82ec-8b566d9a1c58-logs\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.693958 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-scripts\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.694416 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-combined-ca-bundle\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.705301 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019fe9ef-3972-45a8-82ec-8b566d9a1c58-config-data\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.718882 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7j4s\" (UniqueName: \"kubernetes.io/projected/019fe9ef-3972-45a8-82ec-8b566d9a1c58-kube-api-access-l7j4s\") pod \"placement-8565876748-g76rq\" (UID: \"019fe9ef-3972-45a8-82ec-8b566d9a1c58\") " pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:55 crc kubenswrapper[4979]: I0130 23:12:55.867459 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:56 crc kubenswrapper[4979]: W0130 23:12:56.328263 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod019fe9ef_3972_45a8_82ec_8b566d9a1c58.slice/crio-2c34f88f9a581d5658628422be5c77163a9ad5138af6fe262b37254d25ce9983 WatchSource:0}: Error finding container 2c34f88f9a581d5658628422be5c77163a9ad5138af6fe262b37254d25ce9983: Status 404 returned error can't find the container with id 2c34f88f9a581d5658628422be5c77163a9ad5138af6fe262b37254d25ce9983 Jan 30 23:12:56 crc kubenswrapper[4979]: I0130 23:12:56.330841 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8565876748-g76rq"] Jan 30 23:12:57 crc kubenswrapper[4979]: I0130 23:12:57.082310 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8565876748-g76rq" event={"ID":"019fe9ef-3972-45a8-82ec-8b566d9a1c58","Type":"ContainerStarted","Data":"a78bb4f6e4d18bbe605a604da375b4ad1adbf8aec4a8179462386250a50af04b"} Jan 30 23:12:57 crc kubenswrapper[4979]: I0130 23:12:57.082788 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8565876748-g76rq" event={"ID":"019fe9ef-3972-45a8-82ec-8b566d9a1c58","Type":"ContainerStarted","Data":"4dee4b2c16d1f46ffe4dc9430bee4be1a0344706c6ab2fe5a75ac0dec5f19b76"} Jan 30 23:12:57 crc kubenswrapper[4979]: I0130 23:12:57.082805 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8565876748-g76rq" Jan 30 23:12:57 crc kubenswrapper[4979]: I0130 23:12:57.082815 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8565876748-g76rq" event={"ID":"019fe9ef-3972-45a8-82ec-8b566d9a1c58","Type":"ContainerStarted","Data":"2c34f88f9a581d5658628422be5c77163a9ad5138af6fe262b37254d25ce9983"} Jan 30 23:12:57 crc kubenswrapper[4979]: I0130 23:12:57.094426 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8565876748-g76rq" podStartSLOduration=2.094411097 podStartE2EDuration="2.094411097s" podCreationTimestamp="2026-01-30 23:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:12:57.090286086 +0000 UTC m=+5573.051533119" watchObservedRunningTime="2026-01-30 23:12:57.094411097 +0000 UTC m=+5573.055658130" Jan 30 23:12:58 crc kubenswrapper[4979]: I0130 23:12:58.077423 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8565876748-g76rq" Jan 30 23:13:00 crc kubenswrapper[4979]: I0130 23:13:00.620859 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:13:00 crc kubenswrapper[4979]: I0130 23:13:00.698551 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d69cf7c75-5qv59"] Jan 30 23:13:00 crc kubenswrapper[4979]: I0130 23:13:00.698813 4979 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerName="dnsmasq-dns" containerID="cri-o://949025542d878f0aec57178ae4767449919585cb47ec404495f570b3fe0d8899" gracePeriod=10 Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.112459 4979 generic.go:334] "Generic (PLEG): container finished" podID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerID="949025542d878f0aec57178ae4767449919585cb47ec404495f570b3fe0d8899" exitCode=0 Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.113185 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" event={"ID":"2ab8f580-9bea-44a9-a732-ef54cf9eef47","Type":"ContainerDied","Data":"949025542d878f0aec57178ae4767449919585cb47ec404495f570b3fe0d8899"} Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.113327 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" event={"ID":"2ab8f580-9bea-44a9-a732-ef54cf9eef47","Type":"ContainerDied","Data":"e0a67ae163b7249d5be022ae19e45164ea2e40a675fe276f1793defd9586ab15"} Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.113422 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a67ae163b7249d5be022ae19e45164ea2e40a675fe276f1793defd9586ab15" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.172772 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.310177 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-dns-svc\") pod \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.310291 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg5lj\" (UniqueName: \"kubernetes.io/projected/2ab8f580-9bea-44a9-a732-ef54cf9eef47-kube-api-access-bg5lj\") pod \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.310357 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-config\") pod \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.310986 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-sb\") pod \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.311126 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-nb\") pod \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\" (UID: \"2ab8f580-9bea-44a9-a732-ef54cf9eef47\") " Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.316696 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab8f580-9bea-44a9-a732-ef54cf9eef47-kube-api-access-bg5lj" (OuterVolumeSpecName: "kube-api-access-bg5lj") pod 
"2ab8f580-9bea-44a9-a732-ef54cf9eef47" (UID: "2ab8f580-9bea-44a9-a732-ef54cf9eef47"). InnerVolumeSpecName "kube-api-access-bg5lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.354870 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ab8f580-9bea-44a9-a732-ef54cf9eef47" (UID: "2ab8f580-9bea-44a9-a732-ef54cf9eef47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.357414 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ab8f580-9bea-44a9-a732-ef54cf9eef47" (UID: "2ab8f580-9bea-44a9-a732-ef54cf9eef47"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.360746 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ab8f580-9bea-44a9-a732-ef54cf9eef47" (UID: "2ab8f580-9bea-44a9-a732-ef54cf9eef47"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.370676 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-config" (OuterVolumeSpecName: "config") pod "2ab8f580-9bea-44a9-a732-ef54cf9eef47" (UID: "2ab8f580-9bea-44a9-a732-ef54cf9eef47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.413598 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.413634 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.413644 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg5lj\" (UniqueName: \"kubernetes.io/projected/2ab8f580-9bea-44a9-a732-ef54cf9eef47-kube-api-access-bg5lj\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.413656 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:01 crc kubenswrapper[4979]: I0130 23:13:01.413667 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ab8f580-9bea-44a9-a732-ef54cf9eef47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:02 crc kubenswrapper[4979]: I0130 23:13:02.122265 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d69cf7c75-5qv59" Jan 30 23:13:02 crc kubenswrapper[4979]: I0130 23:13:02.177649 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d69cf7c75-5qv59"] Jan 30 23:13:02 crc kubenswrapper[4979]: I0130 23:13:02.191315 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d69cf7c75-5qv59"] Jan 30 23:13:03 crc kubenswrapper[4979]: I0130 23:13:03.083282 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" path="/var/lib/kubelet/pods/2ab8f580-9bea-44a9-a732-ef54cf9eef47/volumes" Jan 30 23:13:26 crc kubenswrapper[4979]: I0130 23:13:26.769881 4979 scope.go:117] "RemoveContainer" containerID="dad83fe6e0dd13f90e65510d87c2454c3b37aa1abc0bae6f460d76fcaed45b7c" Jan 30 23:13:26 crc kubenswrapper[4979]: I0130 23:13:26.834667 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8565876748-g76rq" Jan 30 23:13:26 crc kubenswrapper[4979]: I0130 23:13:26.834982 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8565876748-g76rq" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.892838 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-cxszb"] Jan 30 23:13:50 crc kubenswrapper[4979]: E0130 23:13:50.893566 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerName="init" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.893578 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerName="init" Jan 30 23:13:50 crc kubenswrapper[4979]: E0130 23:13:50.893603 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerName="dnsmasq-dns" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.893609 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerName="dnsmasq-dns" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.893743 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab8f580-9bea-44a9-a732-ef54cf9eef47" containerName="dnsmasq-dns" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.894283 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.905164 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cxszb"] Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.949638 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhzv2\" (UniqueName: \"kubernetes.io/projected/8f726869-e2f9-4a3b-b40a-236ad3a8566c-kube-api-access-hhzv2\") pod \"nova-api-db-create-cxszb\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.949688 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f726869-e2f9-4a3b-b40a-236ad3a8566c-operator-scripts\") pod \"nova-api-db-create-cxszb\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.993422 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-nj2pr"] Jan 30 23:13:50 crc kubenswrapper[4979]: I0130 23:13:50.996655 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.004507 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nj2pr"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.051999 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfd88\" (UniqueName: \"kubernetes.io/projected/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-kube-api-access-gfd88\") pod \"nova-cell0-db-create-nj2pr\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.052127 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-operator-scripts\") pod \"nova-cell0-db-create-nj2pr\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.052171 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhzv2\" (UniqueName: \"kubernetes.io/projected/8f726869-e2f9-4a3b-b40a-236ad3a8566c-kube-api-access-hhzv2\") pod \"nova-api-db-create-cxszb\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.052199 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f726869-e2f9-4a3b-b40a-236ad3a8566c-operator-scripts\") pod \"nova-api-db-create-cxszb\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.052939 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f726869-e2f9-4a3b-b40a-236ad3a8566c-operator-scripts\") pod \"nova-api-db-create-cxszb\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:51 crc 
kubenswrapper[4979]: I0130 23:13:51.075669 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhzv2\" (UniqueName: \"kubernetes.io/projected/8f726869-e2f9-4a3b-b40a-236ad3a8566c-kube-api-access-hhzv2\") pod \"nova-api-db-create-cxszb\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.102984 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-eff7-account-create-update-zbvkl"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.106789 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.110594 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.115285 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-eff7-account-create-update-zbvkl"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.153295 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfd88\" (UniqueName: \"kubernetes.io/projected/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-kube-api-access-gfd88\") pod \"nova-cell0-db-create-nj2pr\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.153437 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-operator-scripts\") pod \"nova-cell0-db-create-nj2pr\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.154201 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-operator-scripts\") pod \"nova-cell0-db-create-nj2pr\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.179928 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfd88\" (UniqueName: \"kubernetes.io/projected/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-kube-api-access-gfd88\") pod \"nova-cell0-db-create-nj2pr\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.198857 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fdfp6"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.199952 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.209338 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.216547 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fdfp6"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.255913 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-operator-scripts\") pod \"nova-api-eff7-account-create-update-zbvkl\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.255981 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28312ce4-d376-4d84-9aea-175ee095e2ce-operator-scripts\") pod \"nova-cell1-db-create-fdfp6\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.256063 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtjl\" (UniqueName: \"kubernetes.io/projected/28312ce4-d376-4d84-9aea-175ee095e2ce-kube-api-access-djtjl\") pod \"nova-cell1-db-create-fdfp6\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.256131 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b895r\" (UniqueName: \"kubernetes.io/projected/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-kube-api-access-b895r\") pod \"nova-api-eff7-account-create-update-zbvkl\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.317570 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.322608 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0ab0-account-create-update-wcw27"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.323667 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.325846 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.357777 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-operator-scripts\") pod \"nova-api-eff7-account-create-update-zbvkl\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.357828 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28312ce4-d376-4d84-9aea-175ee095e2ce-operator-scripts\") pod \"nova-cell1-db-create-fdfp6\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.357875 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djtjl\" (UniqueName: \"kubernetes.io/projected/28312ce4-d376-4d84-9aea-175ee095e2ce-kube-api-access-djtjl\") pod \"nova-cell1-db-create-fdfp6\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.357925 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b895r\" (UniqueName: \"kubernetes.io/projected/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-kube-api-access-b895r\") pod \"nova-api-eff7-account-create-update-zbvkl\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.358914 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-operator-scripts\") pod \"nova-api-eff7-account-create-update-zbvkl\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.366760 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0ab0-account-create-update-wcw27"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.373754 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28312ce4-d376-4d84-9aea-175ee095e2ce-operator-scripts\") pod \"nova-cell1-db-create-fdfp6\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.421699 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djtjl\" (UniqueName: \"kubernetes.io/projected/28312ce4-d376-4d84-9aea-175ee095e2ce-kube-api-access-djtjl\") pod \"nova-cell1-db-create-fdfp6\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.422186 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b895r\" (UniqueName: \"kubernetes.io/projected/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-kube-api-access-b895r\") pod 
\"nova-api-eff7-account-create-update-zbvkl\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.437733 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.466993 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xlrt\" (UniqueName: \"kubernetes.io/projected/20a160f3-ed61-481d-be84-cdc6c7b6097a-kube-api-access-8xlrt\") pod \"nova-cell0-0ab0-account-create-update-wcw27\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.467109 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a160f3-ed61-481d-be84-cdc6c7b6097a-operator-scripts\") pod \"nova-cell0-0ab0-account-create-update-wcw27\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.544065 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.544790 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9d4a-account-create-update-t4fvj"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.546784 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.556801 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.564931 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9d4a-account-create-update-t4fvj"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.626252 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xlrt\" (UniqueName: \"kubernetes.io/projected/20a160f3-ed61-481d-be84-cdc6c7b6097a-kube-api-access-8xlrt\") pod \"nova-cell0-0ab0-account-create-update-wcw27\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.626947 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a160f3-ed61-481d-be84-cdc6c7b6097a-operator-scripts\") pod \"nova-cell0-0ab0-account-create-update-wcw27\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.628281 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a160f3-ed61-481d-be84-cdc6c7b6097a-operator-scripts\") pod \"nova-cell0-0ab0-account-create-update-wcw27\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.654736 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8xlrt\" (UniqueName: \"kubernetes.io/projected/20a160f3-ed61-481d-be84-cdc6c7b6097a-kube-api-access-8xlrt\") pod \"nova-cell0-0ab0-account-create-update-wcw27\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.666301 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cxszb"] Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.706311 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.729107 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0010c53f-b0a4-44bd-9178-bbd2941973ff-operator-scripts\") pod \"nova-cell1-9d4a-account-create-update-t4fvj\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.729233 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxlnx\" (UniqueName: \"kubernetes.io/projected/0010c53f-b0a4-44bd-9178-bbd2941973ff-kube-api-access-fxlnx\") pod \"nova-cell1-9d4a-account-create-update-t4fvj\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.830293 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxlnx\" (UniqueName: \"kubernetes.io/projected/0010c53f-b0a4-44bd-9178-bbd2941973ff-kube-api-access-fxlnx\") pod \"nova-cell1-9d4a-account-create-update-t4fvj\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.830680 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0010c53f-b0a4-44bd-9178-bbd2941973ff-operator-scripts\") pod \"nova-cell1-9d4a-account-create-update-t4fvj\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.831476 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0010c53f-b0a4-44bd-9178-bbd2941973ff-operator-scripts\") pod \"nova-cell1-9d4a-account-create-update-t4fvj\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.848922 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxlnx\" (UniqueName: \"kubernetes.io/projected/0010c53f-b0a4-44bd-9178-bbd2941973ff-kube-api-access-fxlnx\") pod \"nova-cell1-9d4a-account-create-update-t4fvj\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.924925 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:51 crc kubenswrapper[4979]: I0130 23:13:51.928910 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nj2pr"] Jan 30 23:13:51 crc kubenswrapper[4979]: W0130 23:13:51.951357 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09fb7fe9_97f7_4af9_897c_e4fb6f234c79.slice/crio-f9417426a923210025b82baf1dd410d81360f20e1e46e5df895d3ec1b50fd6ca WatchSource:0}: Error finding container f9417426a923210025b82baf1dd410d81360f20e1e46e5df895d3ec1b50fd6ca: Status 404 returned error can't find the container with id f9417426a923210025b82baf1dd410d81360f20e1e46e5df895d3ec1b50fd6ca Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.054186 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-eff7-account-create-update-zbvkl"] Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.121102 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fdfp6"] Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.185621 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9d4a-account-create-update-t4fvj"] Jan 30 23:13:52 crc kubenswrapper[4979]: W0130 23:13:52.196289 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0010c53f_b0a4_44bd_9178_bbd2941973ff.slice/crio-5a3c396e3d760ce9651d06e6b6aa9f255f6fbbd29e20b06941ef288b89076642 WatchSource:0}: Error finding container 5a3c396e3d760ce9651d06e6b6aa9f255f6fbbd29e20b06941ef288b89076642: Status 404 returned error can't find the container with id 5a3c396e3d760ce9651d06e6b6aa9f255f6fbbd29e20b06941ef288b89076642 Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.199167 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0ab0-account-create-update-wcw27"] Jan 30 23:13:52 crc kubenswrapper[4979]: W0130 23:13:52.208372 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20a160f3_ed61_481d_be84_cdc6c7b6097a.slice/crio-d7c5ca1e39bf988d9f7ac29012f96b4c25c116b8db1b2aeb09747e0c59c590cc WatchSource:0}: Error finding container d7c5ca1e39bf988d9f7ac29012f96b4c25c116b8db1b2aeb09747e0c59c590cc: Status 404 returned error can't find the container with id d7c5ca1e39bf988d9f7ac29012f96b4c25c116b8db1b2aeb09747e0c59c590cc Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.615322 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eff7-account-create-update-zbvkl" event={"ID":"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd","Type":"ContainerStarted","Data":"d6036102c9a9e4c432b5f565faa7d7dd06e4a0ac83ea3d325a705b0f27afa0af"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.615368 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eff7-account-create-update-zbvkl" event={"ID":"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd","Type":"ContainerStarted","Data":"9e8472bdcdaadebac64e1b2f96d0af2b368fb0879ca9fc61ccf0b4fcdb472597"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.618235 4979 generic.go:334] "Generic (PLEG): container finished" podID="09fb7fe9-97f7-4af9-897c-e4fb6f234c79" containerID="27746524c4c68ca5b766ef144aa2b7cd8bd00780eefec84e45e51a6c155cf253" exitCode=0 Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.618331 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nj2pr" event={"ID":"09fb7fe9-97f7-4af9-897c-e4fb6f234c79","Type":"ContainerDied","Data":"27746524c4c68ca5b766ef144aa2b7cd8bd00780eefec84e45e51a6c155cf253"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.618383 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nj2pr" event={"ID":"09fb7fe9-97f7-4af9-897c-e4fb6f234c79","Type":"ContainerStarted","Data":"f9417426a923210025b82baf1dd410d81360f20e1e46e5df895d3ec1b50fd6ca"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.624550 4979 generic.go:334] "Generic (PLEG): container finished" podID="8f726869-e2f9-4a3b-b40a-236ad3a8566c" containerID="3d49f76579ebce159dde4f7f8e10b1d7dd782ed39ac26b0b2a652ca85113974a" exitCode=0 Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.624744 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cxszb" event={"ID":"8f726869-e2f9-4a3b-b40a-236ad3a8566c","Type":"ContainerDied","Data":"3d49f76579ebce159dde4f7f8e10b1d7dd782ed39ac26b0b2a652ca85113974a"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.624784 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cxszb" event={"ID":"8f726869-e2f9-4a3b-b40a-236ad3a8566c","Type":"ContainerStarted","Data":"ff223dea7923b35bbf27adc16d58465c7a8c2018c20ec345e5f1131825c02905"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.626425 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" event={"ID":"0010c53f-b0a4-44bd-9178-bbd2941973ff","Type":"ContainerStarted","Data":"dcc8eb2dc0a607435ecf93ba244414771c7370f6f382f6c64913f281ce050673"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.626479 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" event={"ID":"0010c53f-b0a4-44bd-9178-bbd2941973ff","Type":"ContainerStarted","Data":"5a3c396e3d760ce9651d06e6b6aa9f255f6fbbd29e20b06941ef288b89076642"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.633591 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fdfp6" event={"ID":"28312ce4-d376-4d84-9aea-175ee095e2ce","Type":"ContainerStarted","Data":"27840084beb4ba874ff13079199d29959179ee34197c63b9cb25f8f1f6190475"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.633934 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fdfp6" event={"ID":"28312ce4-d376-4d84-9aea-175ee095e2ce","Type":"ContainerStarted","Data":"e7b7596bba991105ac1cdca95c1c01dcf5161ff0fa25b7bd1a44652e91891d68"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.634221 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-eff7-account-create-update-zbvkl" podStartSLOduration=1.6342102490000001 podStartE2EDuration="1.634210249s" podCreationTimestamp="2026-01-30 23:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:13:52.633261184 +0000 UTC m=+5628.594508227" watchObservedRunningTime="2026-01-30 23:13:52.634210249 +0000 UTC m=+5628.595457282" Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.636211 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" 
event={"ID":"20a160f3-ed61-481d-be84-cdc6c7b6097a","Type":"ContainerStarted","Data":"95d8644ba79bb1a7acb56c2741c41279f8988b80e0d6356e0c3aa672c820a8cd"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.636258 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" event={"ID":"20a160f3-ed61-481d-be84-cdc6c7b6097a","Type":"ContainerStarted","Data":"d7c5ca1e39bf988d9f7ac29012f96b4c25c116b8db1b2aeb09747e0c59c590cc"} Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.677487 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" podStartSLOduration=1.677466023 podStartE2EDuration="1.677466023s" podCreationTimestamp="2026-01-30 23:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:13:52.664853803 +0000 UTC m=+5628.626100846" watchObservedRunningTime="2026-01-30 23:13:52.677466023 +0000 UTC m=+5628.638713056" Jan 30 23:13:52 crc kubenswrapper[4979]: I0130 23:13:52.699519 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" podStartSLOduration=1.699497315 podStartE2EDuration="1.699497315s" podCreationTimestamp="2026-01-30 23:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:13:52.690804392 +0000 UTC m=+5628.652051435" watchObservedRunningTime="2026-01-30 23:13:52.699497315 +0000 UTC m=+5628.660744348" Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.653729 4979 generic.go:334] "Generic (PLEG): container finished" podID="28312ce4-d376-4d84-9aea-175ee095e2ce" containerID="27840084beb4ba874ff13079199d29959179ee34197c63b9cb25f8f1f6190475" exitCode=0 Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.653788 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fdfp6" event={"ID":"28312ce4-d376-4d84-9aea-175ee095e2ce","Type":"ContainerDied","Data":"27840084beb4ba874ff13079199d29959179ee34197c63b9cb25f8f1f6190475"} Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.655970 4979 generic.go:334] "Generic (PLEG): container finished" podID="20a160f3-ed61-481d-be84-cdc6c7b6097a" containerID="95d8644ba79bb1a7acb56c2741c41279f8988b80e0d6356e0c3aa672c820a8cd" exitCode=0 Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.656010 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" event={"ID":"20a160f3-ed61-481d-be84-cdc6c7b6097a","Type":"ContainerDied","Data":"95d8644ba79bb1a7acb56c2741c41279f8988b80e0d6356e0c3aa672c820a8cd"} Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.658509 4979 generic.go:334] "Generic (PLEG): container finished" podID="c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" containerID="d6036102c9a9e4c432b5f565faa7d7dd06e4a0ac83ea3d325a705b0f27afa0af" exitCode=0 Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.658569 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eff7-account-create-update-zbvkl" event={"ID":"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd","Type":"ContainerDied","Data":"d6036102c9a9e4c432b5f565faa7d7dd06e4a0ac83ea3d325a705b0f27afa0af"} Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.659905 4979 generic.go:334] "Generic (PLEG): container finished" podID="0010c53f-b0a4-44bd-9178-bbd2941973ff" 
containerID="dcc8eb2dc0a607435ecf93ba244414771c7370f6f382f6c64913f281ce050673" exitCode=0 Jan 30 23:13:53 crc kubenswrapper[4979]: I0130 23:13:53.659952 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" event={"ID":"0010c53f-b0a4-44bd-9178-bbd2941973ff","Type":"ContainerDied","Data":"dcc8eb2dc0a607435ecf93ba244414771c7370f6f382f6c64913f281ce050673"} Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.034624 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.077621 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djtjl\" (UniqueName: \"kubernetes.io/projected/28312ce4-d376-4d84-9aea-175ee095e2ce-kube-api-access-djtjl\") pod \"28312ce4-d376-4d84-9aea-175ee095e2ce\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.081094 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28312ce4-d376-4d84-9aea-175ee095e2ce-operator-scripts\") pod \"28312ce4-d376-4d84-9aea-175ee095e2ce\" (UID: \"28312ce4-d376-4d84-9aea-175ee095e2ce\") " Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.082318 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28312ce4-d376-4d84-9aea-175ee095e2ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28312ce4-d376-4d84-9aea-175ee095e2ce" (UID: "28312ce4-d376-4d84-9aea-175ee095e2ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.087069 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28312ce4-d376-4d84-9aea-175ee095e2ce-kube-api-access-djtjl" (OuterVolumeSpecName: "kube-api-access-djtjl") pod "28312ce4-d376-4d84-9aea-175ee095e2ce" (UID: "28312ce4-d376-4d84-9aea-175ee095e2ce"). InnerVolumeSpecName "kube-api-access-djtjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.133322 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.138864 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.182259 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhzv2\" (UniqueName: \"kubernetes.io/projected/8f726869-e2f9-4a3b-b40a-236ad3a8566c-kube-api-access-hhzv2\") pod \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.182372 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-operator-scripts\") pod \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.182441 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f726869-e2f9-4a3b-b40a-236ad3a8566c-operator-scripts\") pod \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\" (UID: \"8f726869-e2f9-4a3b-b40a-236ad3a8566c\") " Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.182465 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfd88\" (UniqueName: \"kubernetes.io/projected/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-kube-api-access-gfd88\") pod \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\" (UID: \"09fb7fe9-97f7-4af9-897c-e4fb6f234c79\") " Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.182902 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09fb7fe9-97f7-4af9-897c-e4fb6f234c79" (UID: "09fb7fe9-97f7-4af9-897c-e4fb6f234c79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.182982 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f726869-e2f9-4a3b-b40a-236ad3a8566c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f726869-e2f9-4a3b-b40a-236ad3a8566c" (UID: "8f726869-e2f9-4a3b-b40a-236ad3a8566c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.185392 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f726869-e2f9-4a3b-b40a-236ad3a8566c-kube-api-access-hhzv2" (OuterVolumeSpecName: "kube-api-access-hhzv2") pod "8f726869-e2f9-4a3b-b40a-236ad3a8566c" (UID: "8f726869-e2f9-4a3b-b40a-236ad3a8566c"). InnerVolumeSpecName "kube-api-access-hhzv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.184798 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.185525 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28312ce4-d376-4d84-9aea-175ee095e2ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.185537 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f726869-e2f9-4a3b-b40a-236ad3a8566c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.185553 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djtjl\" (UniqueName: \"kubernetes.io/projected/28312ce4-d376-4d84-9aea-175ee095e2ce-kube-api-access-djtjl\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.187448 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-kube-api-access-gfd88" (OuterVolumeSpecName: "kube-api-access-gfd88") pod "09fb7fe9-97f7-4af9-897c-e4fb6f234c79" (UID: "09fb7fe9-97f7-4af9-897c-e4fb6f234c79"). InnerVolumeSpecName "kube-api-access-gfd88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.295490 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfd88\" (UniqueName: \"kubernetes.io/projected/09fb7fe9-97f7-4af9-897c-e4fb6f234c79-kube-api-access-gfd88\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.295541 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhzv2\" (UniqueName: \"kubernetes.io/projected/8f726869-e2f9-4a3b-b40a-236ad3a8566c-kube-api-access-hhzv2\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.673622 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cxszb" event={"ID":"8f726869-e2f9-4a3b-b40a-236ad3a8566c","Type":"ContainerDied","Data":"ff223dea7923b35bbf27adc16d58465c7a8c2018c20ec345e5f1131825c02905"} Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.673673 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff223dea7923b35bbf27adc16d58465c7a8c2018c20ec345e5f1131825c02905" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.673636 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cxszb" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.676828 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fdfp6" event={"ID":"28312ce4-d376-4d84-9aea-175ee095e2ce","Type":"ContainerDied","Data":"e7b7596bba991105ac1cdca95c1c01dcf5161ff0fa25b7bd1a44652e91891d68"} Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.676871 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b7596bba991105ac1cdca95c1c01dcf5161ff0fa25b7bd1a44652e91891d68" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.676874 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fdfp6" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.680181 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nj2pr" event={"ID":"09fb7fe9-97f7-4af9-897c-e4fb6f234c79","Type":"ContainerDied","Data":"f9417426a923210025b82baf1dd410d81360f20e1e46e5df895d3ec1b50fd6ca"} Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.680637 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9417426a923210025b82baf1dd410d81360f20e1e46e5df895d3ec1b50fd6ca" Jan 30 23:13:54 crc kubenswrapper[4979]: I0130 23:13:54.680424 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nj2pr" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.150641 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.166018 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.191986 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.315740 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a160f3-ed61-481d-be84-cdc6c7b6097a-operator-scripts\") pod \"20a160f3-ed61-481d-be84-cdc6c7b6097a\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.315880 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b895r\" (UniqueName: \"kubernetes.io/projected/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-kube-api-access-b895r\") pod \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.315917 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-operator-scripts\") pod \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\" (UID: \"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd\") " Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.315964 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0010c53f-b0a4-44bd-9178-bbd2941973ff-operator-scripts\") pod \"0010c53f-b0a4-44bd-9178-bbd2941973ff\" (UID: \"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.316001 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xlrt\" (UniqueName: \"kubernetes.io/projected/20a160f3-ed61-481d-be84-cdc6c7b6097a-kube-api-access-8xlrt\") pod \"20a160f3-ed61-481d-be84-cdc6c7b6097a\" (UID: \"20a160f3-ed61-481d-be84-cdc6c7b6097a\") " Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.316108 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxlnx\" (UniqueName: \"kubernetes.io/projected/0010c53f-b0a4-44bd-9178-bbd2941973ff-kube-api-access-fxlnx\") pod \"0010c53f-b0a4-44bd-9178-bbd2941973ff\" (UID: 
\"0010c53f-b0a4-44bd-9178-bbd2941973ff\") " Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.316641 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0010c53f-b0a4-44bd-9178-bbd2941973ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0010c53f-b0a4-44bd-9178-bbd2941973ff" (UID: "0010c53f-b0a4-44bd-9178-bbd2941973ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.316735 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" (UID: "c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.316931 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20a160f3-ed61-481d-be84-cdc6c7b6097a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20a160f3-ed61-481d-be84-cdc6c7b6097a" (UID: "20a160f3-ed61-481d-be84-cdc6c7b6097a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.320654 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-kube-api-access-b895r" (OuterVolumeSpecName: "kube-api-access-b895r") pod "c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" (UID: "c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd"). InnerVolumeSpecName "kube-api-access-b895r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.320729 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a160f3-ed61-481d-be84-cdc6c7b6097a-kube-api-access-8xlrt" (OuterVolumeSpecName: "kube-api-access-8xlrt") pod "20a160f3-ed61-481d-be84-cdc6c7b6097a" (UID: "20a160f3-ed61-481d-be84-cdc6c7b6097a"). InnerVolumeSpecName "kube-api-access-8xlrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.321246 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0010c53f-b0a4-44bd-9178-bbd2941973ff-kube-api-access-fxlnx" (OuterVolumeSpecName: "kube-api-access-fxlnx") pod "0010c53f-b0a4-44bd-9178-bbd2941973ff" (UID: "0010c53f-b0a4-44bd-9178-bbd2941973ff"). InnerVolumeSpecName "kube-api-access-fxlnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.418142 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20a160f3-ed61-481d-be84-cdc6c7b6097a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.418428 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b895r\" (UniqueName: \"kubernetes.io/projected/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-kube-api-access-b895r\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.418509 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.418587 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0010c53f-b0a4-44bd-9178-bbd2941973ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.418692 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xlrt\" (UniqueName: \"kubernetes.io/projected/20a160f3-ed61-481d-be84-cdc6c7b6097a-kube-api-access-8xlrt\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.418789 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxlnx\" (UniqueName: \"kubernetes.io/projected/0010c53f-b0a4-44bd-9178-bbd2941973ff-kube-api-access-fxlnx\") on node \"crc\" DevicePath \"\"" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.695411 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" event={"ID":"20a160f3-ed61-481d-be84-cdc6c7b6097a","Type":"ContainerDied","Data":"d7c5ca1e39bf988d9f7ac29012f96b4c25c116b8db1b2aeb09747e0c59c590cc"} Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.695459 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c5ca1e39bf988d9f7ac29012f96b4c25c116b8db1b2aeb09747e0c59c590cc" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.695576 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ab0-account-create-update-wcw27" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.701434 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eff7-account-create-update-zbvkl" event={"ID":"c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd","Type":"ContainerDied","Data":"9e8472bdcdaadebac64e1b2f96d0af2b368fb0879ca9fc61ccf0b4fcdb472597"} Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.701542 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e8472bdcdaadebac64e1b2f96d0af2b368fb0879ca9fc61ccf0b4fcdb472597" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.701444 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-eff7-account-create-update-zbvkl" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.709825 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" event={"ID":"0010c53f-b0a4-44bd-9178-bbd2941973ff","Type":"ContainerDied","Data":"5a3c396e3d760ce9651d06e6b6aa9f255f6fbbd29e20b06941ef288b89076642"} Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.709898 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a3c396e3d760ce9651d06e6b6aa9f255f6fbbd29e20b06941ef288b89076642" Jan 30 23:13:55 crc kubenswrapper[4979]: I0130 23:13:55.709901 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9d4a-account-create-update-t4fvj" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.646897 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbzxc"] Jan 30 23:13:56 crc kubenswrapper[4979]: E0130 23:13:56.647270 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f726869-e2f9-4a3b-b40a-236ad3a8566c" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647287 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f726869-e2f9-4a3b-b40a-236ad3a8566c" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: E0130 23:13:56.647298 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647307 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: E0130 23:13:56.647319 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a160f3-ed61-481d-be84-cdc6c7b6097a" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647326 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a160f3-ed61-481d-be84-cdc6c7b6097a" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: E0130 23:13:56.647342 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0010c53f-b0a4-44bd-9178-bbd2941973ff" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647351 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0010c53f-b0a4-44bd-9178-bbd2941973ff" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: E0130 23:13:56.647380 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28312ce4-d376-4d84-9aea-175ee095e2ce" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647387 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="28312ce4-d376-4d84-9aea-175ee095e2ce" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: E0130 23:13:56.647403 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09fb7fe9-97f7-4af9-897c-e4fb6f234c79" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647410 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="09fb7fe9-97f7-4af9-897c-e4fb6f234c79" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: 
I0130 23:13:56.647559 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a160f3-ed61-481d-be84-cdc6c7b6097a" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647568 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="28312ce4-d376-4d84-9aea-175ee095e2ce" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647578 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="09fb7fe9-97f7-4af9-897c-e4fb6f234c79" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647588 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647602 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0010c53f-b0a4-44bd-9178-bbd2941973ff" containerName="mariadb-account-create-update" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.647612 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f726869-e2f9-4a3b-b40a-236ad3a8566c" containerName="mariadb-database-create" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.648455 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.655472 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5d7d2" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.655722 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.655916 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.668172 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbzxc"] Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.742348 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-config-data\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.742395 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.742536 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ntxf\" (UniqueName: \"kubernetes.io/projected/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-kube-api-access-5ntxf\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.742644 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-scripts\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.844019 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-config-data\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.844098 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.844187 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ntxf\" (UniqueName: \"kubernetes.io/projected/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-kube-api-access-5ntxf\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.844243 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-scripts\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.847703 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-scripts\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.847703 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.848226 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-config-data\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.883460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ntxf\" (UniqueName: \"kubernetes.io/projected/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-kube-api-access-5ntxf\") pod \"nova-cell0-conductor-db-sync-zbzxc\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:56 crc kubenswrapper[4979]: I0130 23:13:56.969730 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:13:57 crc kubenswrapper[4979]: I0130 23:13:57.482669 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbzxc"] Jan 30 23:13:57 crc kubenswrapper[4979]: I0130 23:13:57.731835 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" event={"ID":"498ed84d-af03-4ccb-bc46-3d1f8ca8861a","Type":"ContainerStarted","Data":"c97facf775c73b551ef6f9048bed47738d4278893d70fd1c9740e75be9b3292e"} Jan 30 23:13:57 crc kubenswrapper[4979]: I0130 23:13:57.731897 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" event={"ID":"498ed84d-af03-4ccb-bc46-3d1f8ca8861a","Type":"ContainerStarted","Data":"11dcf61649e55b780fcd94737fb264550a32846241b86a9cd7be47a7c418a6f6"} Jan 30 23:13:57 crc kubenswrapper[4979]: I0130 23:13:57.761069 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" podStartSLOduration=1.760995624 podStartE2EDuration="1.760995624s" podCreationTimestamp="2026-01-30 23:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:13:57.760724267 +0000 UTC m=+5633.721971300" watchObservedRunningTime="2026-01-30 23:13:57.760995624 +0000 UTC m=+5633.722242697" Jan 30 23:14:02 crc kubenswrapper[4979]: I0130 23:14:02.040228 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:14:02 crc kubenswrapper[4979]: I0130 23:14:02.040741 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:14:02 crc kubenswrapper[4979]: I0130 23:14:02.805245 4979 generic.go:334] "Generic (PLEG): container finished" podID="498ed84d-af03-4ccb-bc46-3d1f8ca8861a" containerID="c97facf775c73b551ef6f9048bed47738d4278893d70fd1c9740e75be9b3292e" exitCode=0 Jan 30 23:14:02 crc kubenswrapper[4979]: I0130 23:14:02.805369 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" event={"ID":"498ed84d-af03-4ccb-bc46-3d1f8ca8861a","Type":"ContainerDied","Data":"c97facf775c73b551ef6f9048bed47738d4278893d70fd1c9740e75be9b3292e"} Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.220144 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.284540 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-config-data\") pod \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.284771 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ntxf\" (UniqueName: \"kubernetes.io/projected/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-kube-api-access-5ntxf\") pod \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.284972 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-scripts\") pod \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.285125 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-combined-ca-bundle\") pod \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\" (UID: \"498ed84d-af03-4ccb-bc46-3d1f8ca8861a\") " Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.293662 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-kube-api-access-5ntxf" (OuterVolumeSpecName: "kube-api-access-5ntxf") pod "498ed84d-af03-4ccb-bc46-3d1f8ca8861a" (UID: "498ed84d-af03-4ccb-bc46-3d1f8ca8861a"). InnerVolumeSpecName "kube-api-access-5ntxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.297160 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-scripts" (OuterVolumeSpecName: "scripts") pod "498ed84d-af03-4ccb-bc46-3d1f8ca8861a" (UID: "498ed84d-af03-4ccb-bc46-3d1f8ca8861a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.319443 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "498ed84d-af03-4ccb-bc46-3d1f8ca8861a" (UID: "498ed84d-af03-4ccb-bc46-3d1f8ca8861a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.330742 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-config-data" (OuterVolumeSpecName: "config-data") pod "498ed84d-af03-4ccb-bc46-3d1f8ca8861a" (UID: "498ed84d-af03-4ccb-bc46-3d1f8ca8861a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.388822 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ntxf\" (UniqueName: \"kubernetes.io/projected/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-kube-api-access-5ntxf\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.388900 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.388921 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.388938 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ed84d-af03-4ccb-bc46-3d1f8ca8861a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.829538 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" event={"ID":"498ed84d-af03-4ccb-bc46-3d1f8ca8861a","Type":"ContainerDied","Data":"11dcf61649e55b780fcd94737fb264550a32846241b86a9cd7be47a7c418a6f6"} Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.830091 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11dcf61649e55b780fcd94737fb264550a32846241b86a9cd7be47a7c418a6f6" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.829670 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbzxc" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.945488 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:14:04 crc kubenswrapper[4979]: E0130 23:14:04.946308 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498ed84d-af03-4ccb-bc46-3d1f8ca8861a" containerName="nova-cell0-conductor-db-sync" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.946344 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="498ed84d-af03-4ccb-bc46-3d1f8ca8861a" containerName="nova-cell0-conductor-db-sync" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.947756 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="498ed84d-af03-4ccb-bc46-3d1f8ca8861a" containerName="nova-cell0-conductor-db-sync" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.948790 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.951977 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.957954 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5d7d2" Jan 30 23:14:04 crc kubenswrapper[4979]: I0130 23:14:04.970083 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.107510 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.107769 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvjxj\" (UniqueName: \"kubernetes.io/projected/bf4fb85a-b378-482c-92d5-34f7f4e99e23-kube-api-access-hvjxj\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.107832 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.209960 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvjxj\" (UniqueName: \"kubernetes.io/projected/bf4fb85a-b378-482c-92d5-34f7f4e99e23-kube-api-access-hvjxj\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.210216 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.210500 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.223112 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.225281 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.228495 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hvjxj\" (UniqueName: \"kubernetes.io/projected/bf4fb85a-b378-482c-92d5-34f7f4e99e23-kube-api-access-hvjxj\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.250413 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.276651 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5d7d2" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.284146 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:05 crc kubenswrapper[4979]: I0130 23:14:05.889163 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:14:06 crc kubenswrapper[4979]: I0130 23:14:06.850074 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf4fb85a-b378-482c-92d5-34f7f4e99e23","Type":"ContainerStarted","Data":"b1be02d2bf255d1b81aa392709216377316fd4e1a002d3ec334823ab28566e4a"} Jan 30 23:14:06 crc kubenswrapper[4979]: I0130 23:14:06.850492 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf4fb85a-b378-482c-92d5-34f7f4e99e23","Type":"ContainerStarted","Data":"291809dc6734d5a9dd972c012cc5bf6b3603448d28e739ac608b8b509bef5d72"} Jan 30 23:14:06 crc kubenswrapper[4979]: I0130 23:14:06.850889 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:06 crc kubenswrapper[4979]: I0130 23:14:06.877348 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.877328451 podStartE2EDuration="2.877328451s" podCreationTimestamp="2026-01-30 23:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:06.872471771 +0000 UTC m=+5642.833718814" watchObservedRunningTime="2026-01-30 23:14:06.877328451 +0000 UTC m=+5642.838575484" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.310101 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.752783 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-ggn6b"] Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.754785 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.756759 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.758159 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.776490 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ggn6b"] Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.814148 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.814223 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbdvw\" (UniqueName: \"kubernetes.io/projected/39641496-4ab5-48e9-98bf-5627a0a79411-kube-api-access-gbdvw\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.814258 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-config-data\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.814356 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-scripts\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.915682 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbdvw\" (UniqueName: \"kubernetes.io/projected/39641496-4ab5-48e9-98bf-5627a0a79411-kube-api-access-gbdvw\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.915781 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-config-data\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.915861 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-scripts\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.915927 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.922720 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-scripts\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.923000 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.923574 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-config-data\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.926482 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.927548 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.929435 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.939108 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.947673 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbdvw\" (UniqueName: \"kubernetes.io/projected/39641496-4ab5-48e9-98bf-5627a0a79411-kube-api-access-gbdvw\") pod \"nova-cell0-cell-mapping-ggn6b\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.965128 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.966631 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:15 crc kubenswrapper[4979]: I0130 23:14:15.968650 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.013223 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018159 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcjfm\" (UniqueName: \"kubernetes.io/projected/0f10fb19-9eb0-41eb-ba70-763c84417475-kube-api-access-jcjfm\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018198 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-config-data\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018229 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018287 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018322 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018345 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66wds\" (UniqueName: \"kubernetes.io/projected/8cdb73b7-0e45-491f-b17f-a867667c059f-kube-api-access-66wds\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.018384 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdb73b7-0e45-491f-b17f-a867667c059f-logs\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.060495 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.062090 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.064817 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.076591 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.090527 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.124518 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.124668 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crnht\" (UniqueName: \"kubernetes.io/projected/bc44f117-1f6a-4e61-8725-a4740971f42d-kube-api-access-crnht\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.124798 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.124884 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.124954 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66wds\" (UniqueName: \"kubernetes.io/projected/8cdb73b7-0e45-491f-b17f-a867667c059f-kube-api-access-66wds\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.125104 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-config-data\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.125220 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdb73b7-0e45-491f-b17f-a867667c059f-logs\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.125316 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcjfm\" (UniqueName: \"kubernetes.io/projected/0f10fb19-9eb0-41eb-ba70-763c84417475-kube-api-access-jcjfm\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.125394 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-config-data\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.125474 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.126572 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdb73b7-0e45-491f-b17f-a867667c059f-logs\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.136778 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dd456f9c9-9bcrr"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.138176 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.143312 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.148871 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-config-data\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.159705 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.166480 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dd456f9c9-9bcrr"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.168894 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.174542 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.176082 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.179576 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66wds\" (UniqueName: \"kubernetes.io/projected/8cdb73b7-0e45-491f-b17f-a867667c059f-kube-api-access-66wds\") pod \"nova-metadata-0\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.179919 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.184582 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.190347 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcjfm\" (UniqueName: \"kubernetes.io/projected/0f10fb19-9eb0-41eb-ba70-763c84417475-kube-api-access-jcjfm\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227062 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-config-data\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227119 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skvhb\" (UniqueName: \"kubernetes.io/projected/065e25fc-286f-4759-9430-a918818caeae-kube-api-access-skvhb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227146 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-config\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227213 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crnht\" (UniqueName: \"kubernetes.io/projected/bc44f117-1f6a-4e61-8725-a4740971f42d-kube-api-access-crnht\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227230 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227247 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-logs\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227275 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-dns-svc\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227298 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227324 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227344 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-config-data\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227386 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m645l\" (UniqueName: \"kubernetes.io/projected/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-kube-api-access-m645l\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.227409 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.232384 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-config-data\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.236356 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.253607 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crnht\" (UniqueName: \"kubernetes.io/projected/bc44f117-1f6a-4e61-8725-a4740971f42d-kube-api-access-crnht\") pod \"nova-scheduler-0\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329678 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-config-data\") pod \"nova-api-0\" (UID: 
\"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329724 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skvhb\" (UniqueName: \"kubernetes.io/projected/065e25fc-286f-4759-9430-a918818caeae-kube-api-access-skvhb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329746 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-config\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329802 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329820 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-logs\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329844 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-dns-svc\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329874 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329919 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m645l\" (UniqueName: \"kubernetes.io/projected/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-kube-api-access-m645l\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.329934 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.331245 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-logs\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.331967 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-dns-svc\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.331973 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.331994 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-config\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.335343 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-config-data\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.336481 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.341363 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.343937 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.347780 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m645l\" (UniqueName: \"kubernetes.io/projected/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-kube-api-access-m645l\") pod \"nova-api-0\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.347977 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skvhb\" (UniqueName: \"kubernetes.io/projected/065e25fc-286f-4759-9430-a918818caeae-kube-api-access-skvhb\") pod \"dnsmasq-dns-7dd456f9c9-9bcrr\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.362064 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.382002 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.561902 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.583667 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.621785 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ggn6b"] Jan 30 23:14:16 crc kubenswrapper[4979]: W0130 23:14:16.635819 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39641496_4ab5_48e9_98bf_5627a0a79411.slice/crio-d4a0aae41ebe14083fbbf538879c4cc70dfc273dbb74e8da5e5c7bcc5610a3c8 WatchSource:0}: Error finding container d4a0aae41ebe14083fbbf538879c4cc70dfc273dbb74e8da5e5c7bcc5610a3c8: Status 404 returned error can't find the container with id d4a0aae41ebe14083fbbf538879c4cc70dfc273dbb74e8da5e5c7bcc5610a3c8 Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.818385 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.825916 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: W0130 23:14:16.826107 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f10fb19_9eb0_41eb_ba70_763c84417475.slice/crio-5627b81b735a8cbed66e09b5ef728389679540accf46c25407eeb57a30eabe48 WatchSource:0}: Error finding container 5627b81b735a8cbed66e09b5ef728389679540accf46c25407eeb57a30eabe48: Status 404 returned error can't find the container with id 5627b81b735a8cbed66e09b5ef728389679540accf46c25407eeb57a30eabe48 Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.915354 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxsh"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.917262 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.919547 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.920714 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.928240 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxsh"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.944725 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzm7d\" (UniqueName: \"kubernetes.io/projected/e541a45b-949e-42d3-bbbd-b7fcf76ae045-kube-api-access-tzm7d\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.944773 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-config-data\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.944929 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.944988 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-scripts\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.971883 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.975572 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ggn6b" event={"ID":"39641496-4ab5-48e9-98bf-5627a0a79411","Type":"ContainerStarted","Data":"f723b534008a3a9bab8f334c93b4004586730fb78452228a2418e3f55070a126"} Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.975612 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ggn6b" event={"ID":"39641496-4ab5-48e9-98bf-5627a0a79411","Type":"ContainerStarted","Data":"d4a0aae41ebe14083fbbf538879c4cc70dfc273dbb74e8da5e5c7bcc5610a3c8"} Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.977684 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cdb73b7-0e45-491f-b17f-a867667c059f","Type":"ContainerStarted","Data":"60509f738bbb30a36ecad997927d99d02ba1be3a0cb973cc7510d23115c3b2cc"} Jan 30 23:14:16 crc kubenswrapper[4979]: I0130 23:14:16.978654 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"0f10fb19-9eb0-41eb-ba70-763c84417475","Type":"ContainerStarted","Data":"5627b81b735a8cbed66e09b5ef728389679540accf46c25407eeb57a30eabe48"} Jan 30 23:14:16 crc kubenswrapper[4979]: W0130 23:14:16.980283 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc44f117_1f6a_4e61_8725_a4740971f42d.slice/crio-286c0deb0a30fe83fe88f556a32c1b1603750fcd6f337d6ac50dabd572b385c8 WatchSource:0}: Error finding container 286c0deb0a30fe83fe88f556a32c1b1603750fcd6f337d6ac50dabd572b385c8: Status 404 returned error can't find the container with id 286c0deb0a30fe83fe88f556a32c1b1603750fcd6f337d6ac50dabd572b385c8 Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.043650 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-ggn6b" podStartSLOduration=2.043633455 podStartE2EDuration="2.043633455s" podCreationTimestamp="2026-01-30 23:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:17.000773022 +0000 UTC m=+5652.962020065" watchObservedRunningTime="2026-01-30 23:14:17.043633455 +0000 UTC m=+5653.004880488" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.047759 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-scripts\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.047863 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzm7d\" (UniqueName: \"kubernetes.io/projected/e541a45b-949e-42d3-bbbd-b7fcf76ae045-kube-api-access-tzm7d\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.047881 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-config-data\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.047946 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.048768 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dd456f9c9-9bcrr"] Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.054913 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-config-data\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.054976 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-scripts\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.066690 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.072249 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzm7d\" (UniqueName: \"kubernetes.io/projected/e541a45b-949e-42d3-bbbd-b7fcf76ae045-kube-api-access-tzm7d\") pod \"nova-cell1-conductor-db-sync-wxxsh\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.121139 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.292619 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:17 crc kubenswrapper[4979]: I0130 23:14:17.830697 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxsh"] Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.073168 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bc44f117-1f6a-4e61-8725-a4740971f42d","Type":"ContainerStarted","Data":"0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.073527 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bc44f117-1f6a-4e61-8725-a4740971f42d","Type":"ContainerStarted","Data":"286c0deb0a30fe83fe88f556a32c1b1603750fcd6f337d6ac50dabd572b385c8"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.089269 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cdb73b7-0e45-491f-b17f-a867667c059f","Type":"ContainerStarted","Data":"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.089329 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cdb73b7-0e45-491f-b17f-a867667c059f","Type":"ContainerStarted","Data":"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.095648 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.095633934 podStartE2EDuration="2.095633934s" podCreationTimestamp="2026-01-30 23:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:18.094639518 +0000 UTC m=+5654.055886551" watchObservedRunningTime="2026-01-30 23:14:18.095633934 +0000 UTC m=+5654.056880967" Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.104480 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8","Type":"ContainerStarted","Data":"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.104528 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8","Type":"ContainerStarted","Data":"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.104539 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8","Type":"ContainerStarted","Data":"b55158532a2f564ce450009ecb5b15953c06c8b9352e105f92880542b2da972c"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.107373 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" event={"ID":"e541a45b-949e-42d3-bbbd-b7fcf76ae045","Type":"ContainerStarted","Data":"83253a6ff4c9a4041197668111d5acbc4ed71199971ffa29a422931ae11b241b"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.113899 4979 generic.go:334] "Generic (PLEG): container finished" podID="065e25fc-286f-4759-9430-a918818caeae" containerID="fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701" exitCode=0 Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.113967 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" event={"ID":"065e25fc-286f-4759-9430-a918818caeae","Type":"ContainerDied","Data":"fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.113993 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" event={"ID":"065e25fc-286f-4759-9430-a918818caeae","Type":"ContainerStarted","Data":"ebd3dade926a167983d467980b49120885f0e096bd8d71d96bc62f48fd9a4976"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.118221 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0f10fb19-9eb0-41eb-ba70-763c84417475","Type":"ContainerStarted","Data":"289ea2f878527cc1ce3d30aa55708642be8e3a359625e35a240bb392a2b265b3"} Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.132583 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.132564317 podStartE2EDuration="3.132564317s" podCreationTimestamp="2026-01-30 23:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:18.12301428 +0000 UTC m=+5654.084261313" watchObservedRunningTime="2026-01-30 23:14:18.132564317 +0000 UTC m=+5654.093811340" Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.153841 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.153817669 podStartE2EDuration="2.153817669s" podCreationTimestamp="2026-01-30 23:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:18.145118905 +0000 UTC m=+5654.106365948" watchObservedRunningTime="2026-01-30 23:14:18.153817669 +0000 UTC m=+5654.115064702" Jan 30 23:14:18 crc kubenswrapper[4979]: I0130 23:14:18.192209 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
podStartSLOduration=3.19219023 podStartE2EDuration="3.19219023s" podCreationTimestamp="2026-01-30 23:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:18.181618117 +0000 UTC m=+5654.142865150" watchObservedRunningTime="2026-01-30 23:14:18.19219023 +0000 UTC m=+5654.153437263" Jan 30 23:14:19 crc kubenswrapper[4979]: I0130 23:14:19.132216 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" event={"ID":"e541a45b-949e-42d3-bbbd-b7fcf76ae045","Type":"ContainerStarted","Data":"658a0275a71d4694f41d9631d5946d0fa7658e2fdfd136878a24bb61565abcdf"} Jan 30 23:14:19 crc kubenswrapper[4979]: I0130 23:14:19.137827 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" event={"ID":"065e25fc-286f-4759-9430-a918818caeae","Type":"ContainerStarted","Data":"22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75"} Jan 30 23:14:19 crc kubenswrapper[4979]: I0130 23:14:19.139177 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:19 crc kubenswrapper[4979]: I0130 23:14:19.174710 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" podStartSLOduration=3.174687721 podStartE2EDuration="3.174687721s" podCreationTimestamp="2026-01-30 23:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:19.163367457 +0000 UTC m=+5655.124614520" watchObservedRunningTime="2026-01-30 23:14:19.174687721 +0000 UTC m=+5655.135934754" Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.156488 4979 generic.go:334] "Generic (PLEG): container finished" podID="e541a45b-949e-42d3-bbbd-b7fcf76ae045" containerID="658a0275a71d4694f41d9631d5946d0fa7658e2fdfd136878a24bb61565abcdf" exitCode=0 Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.156563 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" event={"ID":"e541a45b-949e-42d3-bbbd-b7fcf76ae045","Type":"ContainerDied","Data":"658a0275a71d4694f41d9631d5946d0fa7658e2fdfd136878a24bb61565abcdf"} Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.178274 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" podStartSLOduration=5.178255699 podStartE2EDuration="5.178255699s" podCreationTimestamp="2026-01-30 23:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:19.196846557 +0000 UTC m=+5655.158093590" watchObservedRunningTime="2026-01-30 23:14:21.178255699 +0000 UTC m=+5657.139502732" Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.341925 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.363122 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.363185 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:14:21 crc kubenswrapper[4979]: I0130 23:14:21.382574 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.174498 4979 generic.go:334] "Generic (PLEG): container finished" podID="39641496-4ab5-48e9-98bf-5627a0a79411" containerID="f723b534008a3a9bab8f334c93b4004586730fb78452228a2418e3f55070a126" exitCode=0 Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.174571 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ggn6b" event={"ID":"39641496-4ab5-48e9-98bf-5627a0a79411","Type":"ContainerDied","Data":"f723b534008a3a9bab8f334c93b4004586730fb78452228a2418e3f55070a126"} Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.629839 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.776616 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-combined-ca-bundle\") pod \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.776744 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-config-data\") pod \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.776780 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzm7d\" (UniqueName: \"kubernetes.io/projected/e541a45b-949e-42d3-bbbd-b7fcf76ae045-kube-api-access-tzm7d\") pod \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.776828 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-scripts\") pod \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\" (UID: \"e541a45b-949e-42d3-bbbd-b7fcf76ae045\") " Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.791670 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e541a45b-949e-42d3-bbbd-b7fcf76ae045-kube-api-access-tzm7d" (OuterVolumeSpecName: "kube-api-access-tzm7d") pod "e541a45b-949e-42d3-bbbd-b7fcf76ae045" (UID: "e541a45b-949e-42d3-bbbd-b7fcf76ae045"). InnerVolumeSpecName "kube-api-access-tzm7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.791828 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-scripts" (OuterVolumeSpecName: "scripts") pod "e541a45b-949e-42d3-bbbd-b7fcf76ae045" (UID: "e541a45b-949e-42d3-bbbd-b7fcf76ae045"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.814763 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e541a45b-949e-42d3-bbbd-b7fcf76ae045" (UID: "e541a45b-949e-42d3-bbbd-b7fcf76ae045"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.817937 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-config-data" (OuterVolumeSpecName: "config-data") pod "e541a45b-949e-42d3-bbbd-b7fcf76ae045" (UID: "e541a45b-949e-42d3-bbbd-b7fcf76ae045"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.878689 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.878730 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.878743 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e541a45b-949e-42d3-bbbd-b7fcf76ae045-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:22 crc kubenswrapper[4979]: I0130 23:14:22.878756 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzm7d\" (UniqueName: \"kubernetes.io/projected/e541a45b-949e-42d3-bbbd-b7fcf76ae045-kube-api-access-tzm7d\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:23 crc kubenswrapper[4979]: E0130 23:14:23.171455 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode541a45b_949e_42d3_bbbd_b7fcf76ae045.slice/crio-83253a6ff4c9a4041197668111d5acbc4ed71199971ffa29a422931ae11b241b\": RecentStats: unable to find data in memory cache]" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.189301 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" event={"ID":"e541a45b-949e-42d3-bbbd-b7fcf76ae045","Type":"ContainerDied","Data":"83253a6ff4c9a4041197668111d5acbc4ed71199971ffa29a422931ae11b241b"} Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.189331 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wxxsh" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.189354 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83253a6ff4c9a4041197668111d5acbc4ed71199971ffa29a422931ae11b241b" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.622298 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.757223 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:14:23 crc kubenswrapper[4979]: E0130 23:14:23.758324 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39641496-4ab5-48e9-98bf-5627a0a79411" containerName="nova-manage" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.758355 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="39641496-4ab5-48e9-98bf-5627a0a79411" containerName="nova-manage" Jan 30 23:14:23 crc kubenswrapper[4979]: E0130 23:14:23.758378 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e541a45b-949e-42d3-bbbd-b7fcf76ae045" containerName="nova-cell1-conductor-db-sync" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.758385 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e541a45b-949e-42d3-bbbd-b7fcf76ae045" containerName="nova-cell1-conductor-db-sync" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.758555 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e541a45b-949e-42d3-bbbd-b7fcf76ae045" containerName="nova-cell1-conductor-db-sync" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.758572 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="39641496-4ab5-48e9-98bf-5627a0a79411" containerName="nova-manage" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.759650 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.762238 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.768589 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.799100 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-config-data\") pod \"39641496-4ab5-48e9-98bf-5627a0a79411\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.799222 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-combined-ca-bundle\") pod \"39641496-4ab5-48e9-98bf-5627a0a79411\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.799269 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbdvw\" (UniqueName: \"kubernetes.io/projected/39641496-4ab5-48e9-98bf-5627a0a79411-kube-api-access-gbdvw\") pod \"39641496-4ab5-48e9-98bf-5627a0a79411\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.799307 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-scripts\") pod \"39641496-4ab5-48e9-98bf-5627a0a79411\" (UID: \"39641496-4ab5-48e9-98bf-5627a0a79411\") " Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.804829 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/39641496-4ab5-48e9-98bf-5627a0a79411-kube-api-access-gbdvw" (OuterVolumeSpecName: "kube-api-access-gbdvw") pod "39641496-4ab5-48e9-98bf-5627a0a79411" (UID: "39641496-4ab5-48e9-98bf-5627a0a79411"). InnerVolumeSpecName "kube-api-access-gbdvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.806010 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-scripts" (OuterVolumeSpecName: "scripts") pod "39641496-4ab5-48e9-98bf-5627a0a79411" (UID: "39641496-4ab5-48e9-98bf-5627a0a79411"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.822876 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-config-data" (OuterVolumeSpecName: "config-data") pod "39641496-4ab5-48e9-98bf-5627a0a79411" (UID: "39641496-4ab5-48e9-98bf-5627a0a79411"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.823213 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39641496-4ab5-48e9-98bf-5627a0a79411" (UID: "39641496-4ab5-48e9-98bf-5627a0a79411"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.901915 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.902367 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qld68\" (UniqueName: \"kubernetes.io/projected/4b128be7-1d02-4fdc-aa5d-356001e694ce-kube-api-access-qld68\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.903087 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.903289 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.903312 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.903324 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbdvw\" (UniqueName: 
\"kubernetes.io/projected/39641496-4ab5-48e9-98bf-5627a0a79411-kube-api-access-gbdvw\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:23 crc kubenswrapper[4979]: I0130 23:14:23.903334 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39641496-4ab5-48e9-98bf-5627a0a79411-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.004844 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.004920 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.004957 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qld68\" (UniqueName: \"kubernetes.io/projected/4b128be7-1d02-4fdc-aa5d-356001e694ce-kube-api-access-qld68\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.008611 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.011569 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.020668 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qld68\" (UniqueName: \"kubernetes.io/projected/4b128be7-1d02-4fdc-aa5d-356001e694ce-kube-api-access-qld68\") pod \"nova-cell1-conductor-0\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.076745 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.243295 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ggn6b" event={"ID":"39641496-4ab5-48e9-98bf-5627a0a79411","Type":"ContainerDied","Data":"d4a0aae41ebe14083fbbf538879c4cc70dfc273dbb74e8da5e5c7bcc5610a3c8"} Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.243347 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4a0aae41ebe14083fbbf538879c4cc70dfc273dbb74e8da5e5c7bcc5610a3c8" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.243434 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ggn6b" Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.408682 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.409279 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-log" containerID="cri-o://7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129" gracePeriod=30 Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.409310 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-api" containerID="cri-o://efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740" gracePeriod=30 Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.430391 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.430646 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bc44f117-1f6a-4e61-8725-a4740971f42d" containerName="nova-scheduler-scheduler" containerID="cri-o://0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e" gracePeriod=30 Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.439451 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.439672 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-log" containerID="cri-o://1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8" gracePeriod=30 Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.439763 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-metadata" containerID="cri-o://65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70" gracePeriod=30 Jan 30 23:14:24 crc kubenswrapper[4979]: I0130 23:14:24.570138 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.025914 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.039834 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141080 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-logs\") pod \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141151 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66wds\" (UniqueName: \"kubernetes.io/projected/8cdb73b7-0e45-491f-b17f-a867667c059f-kube-api-access-66wds\") pod \"8cdb73b7-0e45-491f-b17f-a867667c059f\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141191 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdb73b7-0e45-491f-b17f-a867667c059f-logs\") pod \"8cdb73b7-0e45-491f-b17f-a867667c059f\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141281 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-combined-ca-bundle\") pod \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141354 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-combined-ca-bundle\") pod \"8cdb73b7-0e45-491f-b17f-a867667c059f\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141378 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-config-data\") pod \"8cdb73b7-0e45-491f-b17f-a867667c059f\" (UID: \"8cdb73b7-0e45-491f-b17f-a867667c059f\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141403 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-config-data\") pod \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141426 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m645l\" (UniqueName: \"kubernetes.io/projected/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-kube-api-access-m645l\") pod \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\" (UID: \"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8\") " Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141564 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-logs" (OuterVolumeSpecName: "logs") pod "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" (UID: "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.141807 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.142276 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cdb73b7-0e45-491f-b17f-a867667c059f-logs" (OuterVolumeSpecName: "logs") pod "8cdb73b7-0e45-491f-b17f-a867667c059f" (UID: "8cdb73b7-0e45-491f-b17f-a867667c059f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.147247 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-kube-api-access-m645l" (OuterVolumeSpecName: "kube-api-access-m645l") pod "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" (UID: "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8"). InnerVolumeSpecName "kube-api-access-m645l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.148386 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cdb73b7-0e45-491f-b17f-a867667c059f-kube-api-access-66wds" (OuterVolumeSpecName: "kube-api-access-66wds") pod "8cdb73b7-0e45-491f-b17f-a867667c059f" (UID: "8cdb73b7-0e45-491f-b17f-a867667c059f"). InnerVolumeSpecName "kube-api-access-66wds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.165887 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cdb73b7-0e45-491f-b17f-a867667c059f" (UID: "8cdb73b7-0e45-491f-b17f-a867667c059f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.166651 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-config-data" (OuterVolumeSpecName: "config-data") pod "8cdb73b7-0e45-491f-b17f-a867667c059f" (UID: "8cdb73b7-0e45-491f-b17f-a867667c059f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.169360 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-config-data" (OuterVolumeSpecName: "config-data") pod "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" (UID: "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.171870 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" (UID: "d27e257b-60c4-4b97-a4e7-f28a1f7d59d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243871 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243914 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cdb73b7-0e45-491f-b17f-a867667c059f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243927 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243940 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m645l\" (UniqueName: \"kubernetes.io/projected/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-kube-api-access-m645l\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243960 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66wds\" (UniqueName: \"kubernetes.io/projected/8cdb73b7-0e45-491f-b17f-a867667c059f-kube-api-access-66wds\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243975 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdb73b7-0e45-491f-b17f-a867667c059f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.243993 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.253202 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4b128be7-1d02-4fdc-aa5d-356001e694ce","Type":"ContainerStarted","Data":"d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.253242 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4b128be7-1d02-4fdc-aa5d-356001e694ce","Type":"ContainerStarted","Data":"dbd9dee23baab194c4b7ba7a0c9558a9771dc7905ed62cf49005905c307d1f4a"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254663 4979 generic.go:334] "Generic (PLEG): container finished" podID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerID="65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70" exitCode=0 Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254691 4979 generic.go:334] "Generic (PLEG): container finished" podID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerID="1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8" exitCode=143 Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254718 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254748 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cdb73b7-0e45-491f-b17f-a867667c059f","Type":"ContainerDied","Data":"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254783 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cdb73b7-0e45-491f-b17f-a867667c059f","Type":"ContainerDied","Data":"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254798 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cdb73b7-0e45-491f-b17f-a867667c059f","Type":"ContainerDied","Data":"60509f738bbb30a36ecad997927d99d02ba1be3a0cb973cc7510d23115c3b2cc"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.254814 4979 scope.go:117] "RemoveContainer" containerID="65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.263674 4979 generic.go:334] "Generic (PLEG): container finished" podID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerID="efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740" exitCode=0 Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.263816 4979 generic.go:334] "Generic (PLEG): container finished" podID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerID="7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129" exitCode=143 Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.263814 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8","Type":"ContainerDied","Data":"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.263870 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8","Type":"ContainerDied","Data":"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.263882 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27e257b-60c4-4b97-a4e7-f28a1f7d59d8","Type":"ContainerDied","Data":"b55158532a2f564ce450009ecb5b15953c06c8b9352e105f92880542b2da972c"} Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.263793 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.284493 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.284477719 podStartE2EDuration="2.284477719s" podCreationTimestamp="2026-01-30 23:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:25.278323473 +0000 UTC m=+5661.239570526" watchObservedRunningTime="2026-01-30 23:14:25.284477719 +0000 UTC m=+5661.245724752" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.296145 4979 scope.go:117] "RemoveContainer" containerID="1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.309848 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.323369 4979 scope.go:117] "RemoveContainer" containerID="65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.324002 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70\": container with ID starting with 65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70 not found: ID does not exist" containerID="65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.324108 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70"} err="failed to get container status \"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70\": rpc error: code = NotFound desc = could not find container \"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70\": container with ID starting with 65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.324149 4979 scope.go:117] "RemoveContainer" containerID="1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.324519 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8\": container with ID starting with 1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8 not found: ID does not exist" containerID="1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.324555 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8"} err="failed to get container status \"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8\": rpc error: code = NotFound desc = could not find container \"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8\": container with ID starting with 1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.324577 4979 scope.go:117] "RemoveContainer" 
containerID="65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.324845 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70"} err="failed to get container status \"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70\": rpc error: code = NotFound desc = could not find container \"65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70\": container with ID starting with 65365b13a72c350a032aadfeae6d21dfdd46cb0e30678381073df8e751405f70 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.326559 4979 scope.go:117] "RemoveContainer" containerID="1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.328526 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8"} err="failed to get container status \"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8\": rpc error: code = NotFound desc = could not find container \"1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8\": container with ID starting with 1b65f102ef3250d336fa961bebe2f5e34d9232296dc6587fb66e55fa904614d8 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.328578 4979 scope.go:117] "RemoveContainer" containerID="efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.333272 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.342772 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.343175 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-metadata" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343189 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-metadata" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.343205 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-api" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343211 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-api" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.343227 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-log" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343234 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-log" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.343270 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-log" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343276 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-log" Jan 30 23:14:25 crc 
kubenswrapper[4979]: I0130 23:14:25.343442 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-log" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343456 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-api" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343487 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" containerName="nova-metadata-metadata" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.343500 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" containerName="nova-api-log" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.344437 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.347728 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.354737 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.364685 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.372150 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.379168 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.380630 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.385524 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.401288 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.423898 4979 scope.go:117] "RemoveContainer" containerID="7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.446812 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5sdc\" (UniqueName: \"kubernetes.io/projected/ecb82300-1ad1-4a3e-aba6-3635e79512a7-kube-api-access-k5sdc\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.446862 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-config-data\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.446955 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.446974 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb82300-1ad1-4a3e-aba6-3635e79512a7-logs\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.451805 4979 scope.go:117] "RemoveContainer" containerID="efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.452191 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740\": container with ID starting with efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740 not found: ID does not exist" containerID="efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.452242 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740"} err="failed to get container status \"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740\": rpc error: code = NotFound desc = could not find container \"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740\": container with ID starting with efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.452270 4979 scope.go:117] "RemoveContainer" containerID="7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129" Jan 30 23:14:25 crc kubenswrapper[4979]: E0130 23:14:25.454409 4979 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129\": container with ID starting with 7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129 not found: ID does not exist" containerID="7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.454436 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129"} err="failed to get container status \"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129\": rpc error: code = NotFound desc = could not find container \"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129\": container with ID starting with 7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.454455 4979 scope.go:117] "RemoveContainer" containerID="efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.454651 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740"} err="failed to get container status \"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740\": rpc error: code = NotFound desc = could not find container \"efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740\": container with ID starting with efbfe9df976b9c0ace8227c63b3eb965ddc33a6c4afbc76d42f45a01eee0b740 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.454673 4979 scope.go:117] "RemoveContainer" containerID="7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.454833 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129"} err="failed to get container status \"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129\": rpc error: code = NotFound desc = could not find container \"7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129\": container with ID starting with 7ecdb2d044a4fe761ca9df7fd9cd7d5643de0c3f63d9a04daf4090940187f129 not found: ID does not exist" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548362 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548417 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt98x\" (UniqueName: \"kubernetes.io/projected/e85c5102-a753-4ad3-9105-8d3071189381-kube-api-access-pt98x\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548461 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85c5102-a753-4ad3-9105-8d3071189381-logs\") pod 
\"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548512 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5sdc\" (UniqueName: \"kubernetes.io/projected/ecb82300-1ad1-4a3e-aba6-3635e79512a7-kube-api-access-k5sdc\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548536 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-config-data\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548557 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-config-data\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548600 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.548625 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb82300-1ad1-4a3e-aba6-3635e79512a7-logs\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.549086 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb82300-1ad1-4a3e-aba6-3635e79512a7-logs\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.554144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.554866 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-config-data\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.565265 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5sdc\" (UniqueName: \"kubernetes.io/projected/ecb82300-1ad1-4a3e-aba6-3635e79512a7-kube-api-access-k5sdc\") pod \"nova-metadata-0\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") " pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.650386 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.650439 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt98x\" (UniqueName: \"kubernetes.io/projected/e85c5102-a753-4ad3-9105-8d3071189381-kube-api-access-pt98x\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.650481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85c5102-a753-4ad3-9105-8d3071189381-logs\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.650549 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-config-data\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.651136 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85c5102-a753-4ad3-9105-8d3071189381-logs\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.653565 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-config-data\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.653950 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.666595 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt98x\" (UniqueName: \"kubernetes.io/projected/e85c5102-a753-4ad3-9105-8d3071189381-kube-api-access-pt98x\") pod \"nova-api-0\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " pod="openstack/nova-api-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.713330 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:25 crc kubenswrapper[4979]: I0130 23:14:25.724976 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.159083 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.249407 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:26 crc kubenswrapper[4979]: W0130 23:14:26.265172 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode85c5102_a753_4ad3_9105_8d3071189381.slice/crio-6f704b76cb04ca4ca5094c825c5095f81455be20961dca8514abd07c7261665e WatchSource:0}: Error finding container 6f704b76cb04ca4ca5094c825c5095f81455be20961dca8514abd07c7261665e: Status 404 returned error can't find the container with id 6f704b76cb04ca4ca5094c825c5095f81455be20961dca8514abd07c7261665e Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.274499 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ecb82300-1ad1-4a3e-aba6-3635e79512a7","Type":"ContainerStarted","Data":"49b78e9a8a12dce88ac01cd85c6a5a960d97ca2bea386aff319c4f3edf124c27"} Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.277081 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.343102 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.356068 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.565059 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.645825 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fd9666d5-fmcqm"] Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.646206 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerName="dnsmasq-dns" containerID="cri-o://6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed" gracePeriod=10 Jan 30 23:14:26 crc kubenswrapper[4979]: I0130 23:14:26.847763 4979 scope.go:117] "RemoveContainer" containerID="b426269bcda15bff5775ef4940ae8834e27498d1a643891649e2cb2da0fea350" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.084273 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cdb73b7-0e45-491f-b17f-a867667c059f" path="/var/lib/kubelet/pods/8cdb73b7-0e45-491f-b17f-a867667c059f/volumes" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.085361 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d27e257b-60c4-4b97-a4e7-f28a1f7d59d8" path="/var/lib/kubelet/pods/d27e257b-60c4-4b97-a4e7-f28a1f7d59d8/volumes" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.123984 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.285825 4979 generic.go:334] "Generic (PLEG): container finished" podID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerID="6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed" exitCode=0 Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.285904 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" event={"ID":"3c13ddd7-ca9f-4446-a482-09cf5b71ced0","Type":"ContainerDied","Data":"6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.285936 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" event={"ID":"3c13ddd7-ca9f-4446-a482-09cf5b71ced0","Type":"ContainerDied","Data":"cd06f6a3729d6e12fae56c41ee58dc413dd41675985b230257d9e7d128d1839d"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.285954 4979 scope.go:117] "RemoveContainer" containerID="6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.285956 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fd9666d5-fmcqm" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.287190 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dmf7\" (UniqueName: \"kubernetes.io/projected/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-kube-api-access-7dmf7\") pod \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.287236 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-sb\") pod \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.287417 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-dns-svc\") pod \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.287442 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-nb\") pod \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.287534 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-config\") pod \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\" (UID: \"3c13ddd7-ca9f-4446-a482-09cf5b71ced0\") " Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.293274 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e85c5102-a753-4ad3-9105-8d3071189381","Type":"ContainerStarted","Data":"65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.293314 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e85c5102-a753-4ad3-9105-8d3071189381","Type":"ContainerStarted","Data":"49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.293326 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e85c5102-a753-4ad3-9105-8d3071189381","Type":"ContainerStarted","Data":"6f704b76cb04ca4ca5094c825c5095f81455be20961dca8514abd07c7261665e"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.303159 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-kube-api-access-7dmf7" (OuterVolumeSpecName: "kube-api-access-7dmf7") pod "3c13ddd7-ca9f-4446-a482-09cf5b71ced0" (UID: "3c13ddd7-ca9f-4446-a482-09cf5b71ced0"). InnerVolumeSpecName "kube-api-access-7dmf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.303612 4979 scope.go:117] "RemoveContainer" containerID="e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.305828 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ecb82300-1ad1-4a3e-aba6-3635e79512a7","Type":"ContainerStarted","Data":"296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.305859 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ecb82300-1ad1-4a3e-aba6-3635e79512a7","Type":"ContainerStarted","Data":"15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77"} Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.317422 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.322223 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.322204045 podStartE2EDuration="2.322204045s" podCreationTimestamp="2026-01-30 23:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:27.3101325 +0000 UTC m=+5663.271379533" watchObservedRunningTime="2026-01-30 23:14:27.322204045 +0000 UTC m=+5663.283451088" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.327747 4979 scope.go:117] "RemoveContainer" containerID="6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed" Jan 30 23:14:27 crc kubenswrapper[4979]: E0130 23:14:27.330316 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed\": container with ID starting with 6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed not found: ID does not exist" containerID="6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.330371 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed"} err="failed to get container status \"6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed\": rpc error: code = NotFound desc = could not find container \"6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed\": container with ID 
starting with 6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed not found: ID does not exist" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.330400 4979 scope.go:117] "RemoveContainer" containerID="e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e" Jan 30 23:14:27 crc kubenswrapper[4979]: E0130 23:14:27.331087 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e\": container with ID starting with e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e not found: ID does not exist" containerID="e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.331143 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e"} err="failed to get container status \"e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e\": rpc error: code = NotFound desc = could not find container \"e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e\": container with ID starting with e0b3a93af4d4edf676599064487c400cb46eb1323b00a7481c5f223058c1755e not found: ID does not exist" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.340676 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.340631461 podStartE2EDuration="2.340631461s" podCreationTimestamp="2026-01-30 23:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:27.329498991 +0000 UTC m=+5663.290746024" watchObservedRunningTime="2026-01-30 23:14:27.340631461 +0000 UTC m=+5663.301878494" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.343837 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c13ddd7-ca9f-4446-a482-09cf5b71ced0" (UID: "3c13ddd7-ca9f-4446-a482-09cf5b71ced0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.364694 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c13ddd7-ca9f-4446-a482-09cf5b71ced0" (UID: "3c13ddd7-ca9f-4446-a482-09cf5b71ced0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.370146 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-config" (OuterVolumeSpecName: "config") pod "3c13ddd7-ca9f-4446-a482-09cf5b71ced0" (UID: "3c13ddd7-ca9f-4446-a482-09cf5b71ced0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.390836 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c13ddd7-ca9f-4446-a482-09cf5b71ced0" (UID: "3c13ddd7-ca9f-4446-a482-09cf5b71ced0"). 
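The paired "RemoveContainer" / "ContainerStatus from runtime service failed ... NotFound" entries above are the kubelet re-querying CRI-O for a container it has just deleted; the NotFound is a benign race, not a lost container. A minimal sketch of the same status query, assuming the stock k8s.io/cri-api client, google.golang.org/grpc, and CRI-O's default socket path (the container ID is copied from the log):

// cristatus.go - illustrative sketch only, not kubelet code: asks CRI-O for a
// container status over its gRPC socket and distinguishes the benign NotFound
// seen in the log from a real RPC failure.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket on an OpenShift node; adjust if relocated.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimev1.NewRuntimeServiceClient(conn)
	// Container ID taken from the log entries above.
	id := "6d2efb06ce76a618f7c03c02b416ae605e07d5a002ba230284df10f8c71cc4ed"
	resp, err := client.ContainerStatus(ctx, &runtimev1.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		// The race in the log: the container was already removed, which is
		// exactly what the deletion wanted, so the error is harmless.
		fmt.Println("container already gone:", id)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("state:", resp.GetStatus().GetState())
}

The kubelet logs this at error level but carries on, since a missing container is a successful deletion.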
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.393090 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.393127 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.393136 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.393150 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dmf7\" (UniqueName: \"kubernetes.io/projected/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-kube-api-access-7dmf7\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.393162 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c13ddd7-ca9f-4446-a482-09cf5b71ced0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.621653 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fd9666d5-fmcqm"] Jan 30 23:14:27 crc kubenswrapper[4979]: I0130 23:14:27.633519 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fd9666d5-fmcqm"] Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.079467 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" path="/var/lib/kubelet/pods/3c13ddd7-ca9f-4446-a482-09cf5b71ced0/volumes" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.101608 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.145317 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.227516 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-combined-ca-bundle\") pod \"bc44f117-1f6a-4e61-8725-a4740971f42d\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.227858 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crnht\" (UniqueName: \"kubernetes.io/projected/bc44f117-1f6a-4e61-8725-a4740971f42d-kube-api-access-crnht\") pod \"bc44f117-1f6a-4e61-8725-a4740971f42d\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.228053 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-config-data\") pod \"bc44f117-1f6a-4e61-8725-a4740971f42d\" (UID: \"bc44f117-1f6a-4e61-8725-a4740971f42d\") " Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.233102 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc44f117-1f6a-4e61-8725-a4740971f42d-kube-api-access-crnht" (OuterVolumeSpecName: "kube-api-access-crnht") pod "bc44f117-1f6a-4e61-8725-a4740971f42d" (UID: "bc44f117-1f6a-4e61-8725-a4740971f42d"). InnerVolumeSpecName "kube-api-access-crnht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.253308 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-config-data" (OuterVolumeSpecName: "config-data") pod "bc44f117-1f6a-4e61-8725-a4740971f42d" (UID: "bc44f117-1f6a-4e61-8725-a4740971f42d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.253796 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc44f117-1f6a-4e61-8725-a4740971f42d" (UID: "bc44f117-1f6a-4e61-8725-a4740971f42d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.322399 4979 generic.go:334] "Generic (PLEG): container finished" podID="bc44f117-1f6a-4e61-8725-a4740971f42d" containerID="0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e" exitCode=0 Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.322445 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bc44f117-1f6a-4e61-8725-a4740971f42d","Type":"ContainerDied","Data":"0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e"} Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.322469 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bc44f117-1f6a-4e61-8725-a4740971f42d","Type":"ContainerDied","Data":"286c0deb0a30fe83fe88f556a32c1b1603750fcd6f337d6ac50dabd572b385c8"} Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.322485 4979 scope.go:117] "RemoveContainer" containerID="0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.322590 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.331383 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.331622 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crnht\" (UniqueName: \"kubernetes.io/projected/bc44f117-1f6a-4e61-8725-a4740971f42d-kube-api-access-crnht\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.331774 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc44f117-1f6a-4e61-8725-a4740971f42d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.381440 4979 scope.go:117] "RemoveContainer" containerID="0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e" Jan 30 23:14:29 crc kubenswrapper[4979]: E0130 23:14:29.381989 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e\": container with ID starting with 0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e not found: ID does not exist" containerID="0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.382021 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e"} err="failed to get container status \"0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e\": rpc error: code = NotFound desc = could not find container \"0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e\": container with ID starting with 0d83345a28ccf3487ecdbca507a6ea17582fea42fa75397d78f1918564e46c0e not found: ID does not exist" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.385721 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.397335 4979 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.404848 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:29 crc kubenswrapper[4979]: E0130 23:14:29.405328 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc44f117-1f6a-4e61-8725-a4740971f42d" containerName="nova-scheduler-scheduler" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.405344 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc44f117-1f6a-4e61-8725-a4740971f42d" containerName="nova-scheduler-scheduler" Jan 30 23:14:29 crc kubenswrapper[4979]: E0130 23:14:29.405367 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerName="init" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.405376 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerName="init" Jan 30 23:14:29 crc kubenswrapper[4979]: E0130 23:14:29.405393 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerName="dnsmasq-dns" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.405401 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerName="dnsmasq-dns" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.405646 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c13ddd7-ca9f-4446-a482-09cf5b71ced0" containerName="dnsmasq-dns" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.405658 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc44f117-1f6a-4e61-8725-a4740971f42d" containerName="nova-scheduler-scheduler" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.406428 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.408525 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.414099 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.535171 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.535246 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzts\" (UniqueName: \"kubernetes.io/projected/41c9f1d3-a870-4b2f-bc60-e2a13d520664-kube-api-access-6fzts\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.535267 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-config-data\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.604388 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jzkql"] Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.606064 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.608981 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.609163 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.618492 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jzkql"] Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.637149 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.637213 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzts\" (UniqueName: \"kubernetes.io/projected/41c9f1d3-a870-4b2f-bc60-e2a13d520664-kube-api-access-6fzts\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.637235 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-config-data\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.640540 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-config-data\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.642889 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.664186 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzts\" (UniqueName: \"kubernetes.io/projected/41c9f1d3-a870-4b2f-bc60-e2a13d520664-kube-api-access-6fzts\") pod \"nova-scheduler-0\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.725713 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.739368 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhw5t\" (UniqueName: \"kubernetes.io/projected/a0c7f950-be1a-4557-8548-d41ac49e8010-kube-api-access-lhw5t\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.739453 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-config-data\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.739596 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-scripts\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.739985 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.841250 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.841313 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhw5t\" (UniqueName: \"kubernetes.io/projected/a0c7f950-be1a-4557-8548-d41ac49e8010-kube-api-access-lhw5t\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.841358 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-config-data\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.841380 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-scripts\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.846713 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: 
\"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.848828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-scripts\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.849340 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-config-data\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.859726 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhw5t\" (UniqueName: \"kubernetes.io/projected/a0c7f950-be1a-4557-8548-d41ac49e8010-kube-api-access-lhw5t\") pod \"nova-cell1-cell-mapping-jzkql\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:29 crc kubenswrapper[4979]: I0130 23:14:29.922703 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:30 crc kubenswrapper[4979]: I0130 23:14:30.218677 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:30 crc kubenswrapper[4979]: W0130 23:14:30.221765 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41c9f1d3_a870_4b2f_bc60_e2a13d520664.slice/crio-24859710f66428c4a027d6fe270a53ad113ef34701af4e49c0218976f8414555 WatchSource:0}: Error finding container 24859710f66428c4a027d6fe270a53ad113ef34701af4e49c0218976f8414555: Status 404 returned error can't find the container with id 24859710f66428c4a027d6fe270a53ad113ef34701af4e49c0218976f8414555 Jan 30 23:14:30 crc kubenswrapper[4979]: I0130 23:14:30.339748 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41c9f1d3-a870-4b2f-bc60-e2a13d520664","Type":"ContainerStarted","Data":"24859710f66428c4a027d6fe270a53ad113ef34701af4e49c0218976f8414555"} Jan 30 23:14:30 crc kubenswrapper[4979]: W0130 23:14:30.368363 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0c7f950_be1a_4557_8548_d41ac49e8010.slice/crio-66ad7d07fa8d076558fdd0b43add46e88601d8b749404df796680dde7f854b51 WatchSource:0}: Error finding container 66ad7d07fa8d076558fdd0b43add46e88601d8b749404df796680dde7f854b51: Status 404 returned error can't find the container with id 66ad7d07fa8d076558fdd0b43add46e88601d8b749404df796680dde7f854b51 Jan 30 23:14:30 crc kubenswrapper[4979]: I0130 23:14:30.368824 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jzkql"] Jan 30 23:14:30 crc kubenswrapper[4979]: I0130 23:14:30.713883 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:14:30 crc kubenswrapper[4979]: I0130 23:14:30.714235 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:14:31 crc kubenswrapper[4979]: I0130 23:14:31.088407 4979 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="bc44f117-1f6a-4e61-8725-a4740971f42d" path="/var/lib/kubelet/pods/bc44f117-1f6a-4e61-8725-a4740971f42d/volumes" Jan 30 23:14:31 crc kubenswrapper[4979]: I0130 23:14:31.351356 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jzkql" event={"ID":"a0c7f950-be1a-4557-8548-d41ac49e8010","Type":"ContainerStarted","Data":"2e5921219826ad4f6046a051d3c3a9bd5014518b8ece445c4e2400e7ac7d238a"} Jan 30 23:14:31 crc kubenswrapper[4979]: I0130 23:14:31.351418 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jzkql" event={"ID":"a0c7f950-be1a-4557-8548-d41ac49e8010","Type":"ContainerStarted","Data":"66ad7d07fa8d076558fdd0b43add46e88601d8b749404df796680dde7f854b51"} Jan 30 23:14:31 crc kubenswrapper[4979]: I0130 23:14:31.353697 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41c9f1d3-a870-4b2f-bc60-e2a13d520664","Type":"ContainerStarted","Data":"c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74"} Jan 30 23:14:31 crc kubenswrapper[4979]: I0130 23:14:31.380779 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jzkql" podStartSLOduration=2.380756984 podStartE2EDuration="2.380756984s" podCreationTimestamp="2026-01-30 23:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:31.375505503 +0000 UTC m=+5667.336752556" watchObservedRunningTime="2026-01-30 23:14:31.380756984 +0000 UTC m=+5667.342004027" Jan 30 23:14:31 crc kubenswrapper[4979]: I0130 23:14:31.412648 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.412625811 podStartE2EDuration="2.412625811s" podCreationTimestamp="2026-01-30 23:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:31.404095841 +0000 UTC m=+5667.365342894" watchObservedRunningTime="2026-01-30 23:14:31.412625811 +0000 UTC m=+5667.373872854" Jan 30 23:14:32 crc kubenswrapper[4979]: I0130 23:14:32.039562 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:14:32 crc kubenswrapper[4979]: I0130 23:14:32.039903 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:14:34 crc kubenswrapper[4979]: I0130 23:14:34.726827 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 23:14:35 crc kubenswrapper[4979]: I0130 23:14:35.394712 4979 generic.go:334] "Generic (PLEG): container finished" podID="a0c7f950-be1a-4557-8548-d41ac49e8010" containerID="2e5921219826ad4f6046a051d3c3a9bd5014518b8ece445c4e2400e7ac7d238a" exitCode=0 Jan 30 23:14:35 crc kubenswrapper[4979]: I0130 23:14:35.394772 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jzkql" 
event={"ID":"a0c7f950-be1a-4557-8548-d41ac49e8010","Type":"ContainerDied","Data":"2e5921219826ad4f6046a051d3c3a9bd5014518b8ece445c4e2400e7ac7d238a"} Jan 30 23:14:35 crc kubenswrapper[4979]: I0130 23:14:35.714501 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 23:14:35 crc kubenswrapper[4979]: I0130 23:14:35.714549 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 23:14:35 crc kubenswrapper[4979]: I0130 23:14:35.726326 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 23:14:35 crc kubenswrapper[4979]: I0130 23:14:35.726367 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.788662 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jzkql" Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.881207 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.71:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.881530 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.71:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.881594 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.72:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.881475 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.72:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.911505 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-config-data\") pod \"a0c7f950-be1a-4557-8548-d41ac49e8010\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.911662 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-combined-ca-bundle\") pod \"a0c7f950-be1a-4557-8548-d41ac49e8010\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.911739 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-scripts\") pod \"a0c7f950-be1a-4557-8548-d41ac49e8010\" (UID: \"a0c7f950-be1a-4557-8548-d41ac49e8010\") " Jan 30 23:14:36 crc 
Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.916299 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c7f950-be1a-4557-8548-d41ac49e8010-kube-api-access-lhw5t" (OuterVolumeSpecName: "kube-api-access-lhw5t") pod "a0c7f950-be1a-4557-8548-d41ac49e8010" (UID: "a0c7f950-be1a-4557-8548-d41ac49e8010"). InnerVolumeSpecName "kube-api-access-lhw5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.916529 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-scripts" (OuterVolumeSpecName: "scripts") pod "a0c7f950-be1a-4557-8548-d41ac49e8010" (UID: "a0c7f950-be1a-4557-8548-d41ac49e8010"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.943279 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0c7f950-be1a-4557-8548-d41ac49e8010" (UID: "a0c7f950-be1a-4557-8548-d41ac49e8010"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:14:36 crc kubenswrapper[4979]: I0130 23:14:36.957191 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-config-data" (OuterVolumeSpecName: "config-data") pod "a0c7f950-be1a-4557-8548-d41ac49e8010" (UID: "a0c7f950-be1a-4557-8548-d41ac49e8010"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.016151 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.016181 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.016191 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0c7f950-be1a-4557-8548-d41ac49e8010-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.016200 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhw5t\" (UniqueName: \"kubernetes.io/projected/a0c7f950-be1a-4557-8548-d41ac49e8010-kube-api-access-lhw5t\") on node \"crc\" DevicePath \"\""
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.411413 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jzkql" event={"ID":"a0c7f950-be1a-4557-8548-d41ac49e8010","Type":"ContainerDied","Data":"66ad7d07fa8d076558fdd0b43add46e88601d8b749404df796680dde7f854b51"}
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.411452 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66ad7d07fa8d076558fdd0b43add46e88601d8b749404df796680dde7f854b51"
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.411469 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jzkql"
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.507195 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.507794 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="41c9f1d3-a870-4b2f-bc60-e2a13d520664" containerName="nova-scheduler-scheduler" containerID="cri-o://c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74" gracePeriod=30
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.516170 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.516373 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-log" containerID="cri-o://49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6" gracePeriod=30
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.516506 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-api" containerID="cri-o://65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283" gracePeriod=30
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.530682 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.531934 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-log" containerID="cri-o://15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77" gracePeriod=30
Jan 30 23:14:37 crc kubenswrapper[4979]: I0130 23:14:37.532251 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-metadata" containerID="cri-o://296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b" gracePeriod=30
Jan 30 23:14:38 crc kubenswrapper[4979]: I0130 23:14:38.420588 4979 generic.go:334] "Generic (PLEG): container finished" podID="e85c5102-a753-4ad3-9105-8d3071189381" containerID="49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6" exitCode=143
Jan 30 23:14:38 crc kubenswrapper[4979]: I0130 23:14:38.420666 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e85c5102-a753-4ad3-9105-8d3071189381","Type":"ContainerDied","Data":"49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6"}
Jan 30 23:14:38 crc kubenswrapper[4979]: I0130 23:14:38.422672 4979 generic.go:334] "Generic (PLEG): container finished" podID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerID="15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77" exitCode=143
Jan 30 23:14:38 crc kubenswrapper[4979]: I0130 23:14:38.422707 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ecb82300-1ad1-4a3e-aba6-3635e79512a7","Type":"ContainerDied","Data":"15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77"}
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.296018 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.405986 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-combined-ca-bundle\") pod \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") "
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.406402 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5sdc\" (UniqueName: \"kubernetes.io/projected/ecb82300-1ad1-4a3e-aba6-3635e79512a7-kube-api-access-k5sdc\") pod \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") "
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.406427 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-config-data\") pod \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") "
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.406482 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb82300-1ad1-4a3e-aba6-3635e79512a7-logs\") pod \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\" (UID: \"ecb82300-1ad1-4a3e-aba6-3635e79512a7\") "
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.407086 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecb82300-1ad1-4a3e-aba6-3635e79512a7-logs" (OuterVolumeSpecName: "logs") pod "ecb82300-1ad1-4a3e-aba6-3635e79512a7" (UID: "ecb82300-1ad1-4a3e-aba6-3635e79512a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.425855 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb82300-1ad1-4a3e-aba6-3635e79512a7-kube-api-access-k5sdc" (OuterVolumeSpecName: "kube-api-access-k5sdc") pod "ecb82300-1ad1-4a3e-aba6-3635e79512a7" (UID: "ecb82300-1ad1-4a3e-aba6-3635e79512a7"). InnerVolumeSpecName "kube-api-access-k5sdc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.430449 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecb82300-1ad1-4a3e-aba6-3635e79512a7" (UID: "ecb82300-1ad1-4a3e-aba6-3635e79512a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.435731 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-config-data" (OuterVolumeSpecName: "config-data") pod "ecb82300-1ad1-4a3e-aba6-3635e79512a7" (UID: "ecb82300-1ad1-4a3e-aba6-3635e79512a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.462367 4979 generic.go:334] "Generic (PLEG): container finished" podID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerID="296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b" exitCode=0
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.462420 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ecb82300-1ad1-4a3e-aba6-3635e79512a7","Type":"ContainerDied","Data":"296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b"}
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.462457 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ecb82300-1ad1-4a3e-aba6-3635e79512a7","Type":"ContainerDied","Data":"49b78e9a8a12dce88ac01cd85c6a5a960d97ca2bea386aff319c4f3edf124c27"}
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.462478 4979 scope.go:117] "RemoveContainer" containerID="296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b"
Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.462615 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.509116 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.509164 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5sdc\" (UniqueName: \"kubernetes.io/projected/ecb82300-1ad1-4a3e-aba6-3635e79512a7-kube-api-access-k5sdc\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.509175 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb82300-1ad1-4a3e-aba6-3635e79512a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.509184 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb82300-1ad1-4a3e-aba6-3635e79512a7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.573388 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.582525 4979 scope.go:117] "RemoveContainer" containerID="15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.593634 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.603280 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:41 crc kubenswrapper[4979]: E0130 23:14:41.603777 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-metadata" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.603798 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-metadata" Jan 30 23:14:41 crc kubenswrapper[4979]: E0130 23:14:41.603812 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c7f950-be1a-4557-8548-d41ac49e8010" containerName="nova-manage" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.603818 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c7f950-be1a-4557-8548-d41ac49e8010" containerName="nova-manage" Jan 30 23:14:41 crc kubenswrapper[4979]: E0130 23:14:41.603840 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-log" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.603847 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-log" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.604095 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c7f950-be1a-4557-8548-d41ac49e8010" containerName="nova-manage" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.604133 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" containerName="nova-metadata-log" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.604143 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" 
containerName="nova-metadata-metadata" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.606928 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.612586 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.613228 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.621806 4979 scope.go:117] "RemoveContainer" containerID="296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b" Jan 30 23:14:41 crc kubenswrapper[4979]: E0130 23:14:41.624821 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b\": container with ID starting with 296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b not found: ID does not exist" containerID="296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.624855 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b"} err="failed to get container status \"296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b\": rpc error: code = NotFound desc = could not find container \"296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b\": container with ID starting with 296ef9e1971275b044b56f2403fdc41b528cee3e54bf655311a51631918a6f1b not found: ID does not exist" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.624880 4979 scope.go:117] "RemoveContainer" containerID="15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77" Jan 30 23:14:41 crc kubenswrapper[4979]: E0130 23:14:41.625304 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77\": container with ID starting with 15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77 not found: ID does not exist" containerID="15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.625333 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77"} err="failed to get container status \"15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77\": rpc error: code = NotFound desc = could not find container \"15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77\": container with ID starting with 15c26d41be8460784acc2e227e15212c6b1a0a643ed9c6685b1e939b85bedb77 not found: ID does not exist" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.716420 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.716741 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e5930e-5323-4957-8495-8ccec47fcec1-logs\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.717006 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-config-data\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.717099 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzlfd\" (UniqueName: \"kubernetes.io/projected/18e5930e-5323-4957-8495-8ccec47fcec1-kube-api-access-gzlfd\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.785236 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.818519 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e5930e-5323-4957-8495-8ccec47fcec1-logs\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.818630 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-config-data\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.818660 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzlfd\" (UniqueName: \"kubernetes.io/projected/18e5930e-5323-4957-8495-8ccec47fcec1-kube-api-access-gzlfd\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.818683 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.819577 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e5930e-5323-4957-8495-8ccec47fcec1-logs\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.826524 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-config-data\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.826612 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.836293 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzlfd\" (UniqueName: \"kubernetes.io/projected/18e5930e-5323-4957-8495-8ccec47fcec1-kube-api-access-gzlfd\") pod \"nova-metadata-0\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.919380 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-config-data\") pod \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.919517 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-combined-ca-bundle\") pod \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.919612 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fzts\" (UniqueName: \"kubernetes.io/projected/41c9f1d3-a870-4b2f-bc60-e2a13d520664-kube-api-access-6fzts\") pod \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\" (UID: \"41c9f1d3-a870-4b2f-bc60-e2a13d520664\") " Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.923511 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c9f1d3-a870-4b2f-bc60-e2a13d520664-kube-api-access-6fzts" (OuterVolumeSpecName: "kube-api-access-6fzts") pod "41c9f1d3-a870-4b2f-bc60-e2a13d520664" (UID: "41c9f1d3-a870-4b2f-bc60-e2a13d520664"). InnerVolumeSpecName "kube-api-access-6fzts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.931060 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.953213 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c9f1d3-a870-4b2f-bc60-e2a13d520664" (UID: "41c9f1d3-a870-4b2f-bc60-e2a13d520664"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:41 crc kubenswrapper[4979]: I0130 23:14:41.963617 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-config-data" (OuterVolumeSpecName: "config-data") pod "41c9f1d3-a870-4b2f-bc60-e2a13d520664" (UID: "41c9f1d3-a870-4b2f-bc60-e2a13d520664"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.024741 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.025043 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c9f1d3-a870-4b2f-bc60-e2a13d520664-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.025057 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fzts\" (UniqueName: \"kubernetes.io/projected/41c9f1d3-a870-4b2f-bc60-e2a13d520664-kube-api-access-6fzts\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.301264 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.430760 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt98x\" (UniqueName: \"kubernetes.io/projected/e85c5102-a753-4ad3-9105-8d3071189381-kube-api-access-pt98x\") pod \"e85c5102-a753-4ad3-9105-8d3071189381\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.431095 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85c5102-a753-4ad3-9105-8d3071189381-logs\") pod \"e85c5102-a753-4ad3-9105-8d3071189381\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.431164 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-combined-ca-bundle\") pod \"e85c5102-a753-4ad3-9105-8d3071189381\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.431200 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-config-data\") pod \"e85c5102-a753-4ad3-9105-8d3071189381\" (UID: \"e85c5102-a753-4ad3-9105-8d3071189381\") " Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.431751 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e85c5102-a753-4ad3-9105-8d3071189381-logs" (OuterVolumeSpecName: "logs") pod "e85c5102-a753-4ad3-9105-8d3071189381" (UID: "e85c5102-a753-4ad3-9105-8d3071189381"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.435331 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85c5102-a753-4ad3-9105-8d3071189381-kube-api-access-pt98x" (OuterVolumeSpecName: "kube-api-access-pt98x") pod "e85c5102-a753-4ad3-9105-8d3071189381" (UID: "e85c5102-a753-4ad3-9105-8d3071189381"). InnerVolumeSpecName "kube-api-access-pt98x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.461233 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-config-data" (OuterVolumeSpecName: "config-data") pod "e85c5102-a753-4ad3-9105-8d3071189381" (UID: "e85c5102-a753-4ad3-9105-8d3071189381"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.470501 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.482330 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e85c5102-a753-4ad3-9105-8d3071189381" (UID: "e85c5102-a753-4ad3-9105-8d3071189381"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.483339 4979 generic.go:334] "Generic (PLEG): container finished" podID="e85c5102-a753-4ad3-9105-8d3071189381" containerID="65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283" exitCode=0 Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.483432 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e85c5102-a753-4ad3-9105-8d3071189381","Type":"ContainerDied","Data":"65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283"} Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.483479 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e85c5102-a753-4ad3-9105-8d3071189381","Type":"ContainerDied","Data":"6f704b76cb04ca4ca5094c825c5095f81455be20961dca8514abd07c7261665e"} Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.483508 4979 scope.go:117] "RemoveContainer" containerID="65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.483654 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.501208 4979 generic.go:334] "Generic (PLEG): container finished" podID="41c9f1d3-a870-4b2f-bc60-e2a13d520664" containerID="c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74" exitCode=0 Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.501254 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41c9f1d3-a870-4b2f-bc60-e2a13d520664","Type":"ContainerDied","Data":"c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74"} Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.501280 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41c9f1d3-a870-4b2f-bc60-e2a13d520664","Type":"ContainerDied","Data":"24859710f66428c4a027d6fe270a53ad113ef34701af4e49c0218976f8414555"} Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.501329 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.531010 4979 scope.go:117] "RemoveContainer" containerID="49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.533096 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.533125 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85c5102-a753-4ad3-9105-8d3071189381-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.533140 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt98x\" (UniqueName: \"kubernetes.io/projected/e85c5102-a753-4ad3-9105-8d3071189381-kube-api-access-pt98x\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.533154 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85c5102-a753-4ad3-9105-8d3071189381-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.537151 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.555025 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.560380 4979 scope.go:117] "RemoveContainer" containerID="65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283" Jan 30 23:14:42 crc kubenswrapper[4979]: E0130 23:14:42.565994 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283\": container with ID starting with 65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283 not found: ID does not exist" containerID="65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.566054 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283"} err="failed to get container status \"65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283\": rpc error: code = NotFound desc = could not find container \"65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283\": container with ID starting with 65bec9e730d78801c004c0aab5fc3bac35f7d6c77eb1eb299e062594f542b283 not found: ID does not exist" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.566078 4979 scope.go:117] "RemoveContainer" containerID="49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6" Jan 30 23:14:42 crc kubenswrapper[4979]: E0130 23:14:42.568712 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6\": container with ID starting with 49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6 not found: ID does not exist" containerID="49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.568754 4979 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6"} err="failed to get container status \"49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6\": rpc error: code = NotFound desc = could not find container \"49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6\": container with ID starting with 49566312a7f7db7745c0770cce0dc361210133d5f2c159d44829b03aec3644d6 not found: ID does not exist" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.568779 4979 scope.go:117] "RemoveContainer" containerID="c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.582182 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.592696 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: E0130 23:14:42.593134 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-api" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.593151 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-api" Jan 30 23:14:42 crc kubenswrapper[4979]: E0130 23:14:42.593165 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-log" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.593176 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-log" Jan 30 23:14:42 crc kubenswrapper[4979]: E0130 23:14:42.593192 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c9f1d3-a870-4b2f-bc60-e2a13d520664" containerName="nova-scheduler-scheduler" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.593199 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c9f1d3-a870-4b2f-bc60-e2a13d520664" containerName="nova-scheduler-scheduler" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.593376 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-api" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.593394 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c9f1d3-a870-4b2f-bc60-e2a13d520664" containerName="nova-scheduler-scheduler" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.593404 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85c5102-a753-4ad3-9105-8d3071189381" containerName="nova-api-log" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.594380 4979 util.go:30] "No sandbox for pod can be found. 
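
The E-level "ContainerStatus from runtime service failed ... NotFound" entries and the matching "DeleteContainer returned error" lines are a benign race: the kubelet asks CRI-O to remove containers that have already been cleaned up, and a NotFound answer simply means the work is done. Expressed against a gRPC runtime (sketch; the status/codes helpers are the real gRPC packages, removeContainer is a hypothetical stand-in):

    // Sketch of tolerating "already gone" on delete, as the kubelet does
    // when a CRI call returns rpc NotFound: deletion is idempotent.
    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func removeContainer(id string) error {
        // Stand-in for a CRI RemoveContainer RPC that may race with runtime GC.
        return status.Error(codes.NotFound, "could not find container "+id)
    }

    func removeIfPresent(id string) error {
        err := removeContainer(id)
        if err != nil && status.Code(err) == codes.NotFound {
            return nil // already removed; nothing left to do
        }
        return err
    }

    func main() {
        if err := removeIfPresent("65bec9e730d7"); err != nil {
            fmt.Println("delete failed:", err)
            return
        }
        fmt.Println("container gone (deleted or never existed)")
    }
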
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.596082 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.621115 4979 scope.go:117] "RemoveContainer" containerID="c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74" Jan 30 23:14:42 crc kubenswrapper[4979]: E0130 23:14:42.621550 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74\": container with ID starting with c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74 not found: ID does not exist" containerID="c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.621589 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74"} err="failed to get container status \"c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74\": rpc error: code = NotFound desc = could not find container \"c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74\": container with ID starting with c7d5ca19e33dc2e5e25d511673fb63dd543b73dd876e4ee01d3d09e328e7bf74 not found: ID does not exist" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.623073 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.632672 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.640171 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.641826 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.643723 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.647742 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.738705 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnbdt\" (UniqueName: \"kubernetes.io/projected/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-kube-api-access-hnbdt\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.738827 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-logs\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.738874 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-config-data\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.738904 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.738936 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.739008 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm2d4\" (UniqueName: \"kubernetes.io/projected/f0607a76-8412-4547-945c-f5672e9516f8-kube-api-access-bm2d4\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.739069 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-config-data\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.840853 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm2d4\" (UniqueName: \"kubernetes.io/projected/f0607a76-8412-4547-945c-f5672e9516f8-kube-api-access-bm2d4\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.840915 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-config-data\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.840984 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnbdt\" (UniqueName: \"kubernetes.io/projected/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-kube-api-access-hnbdt\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.841060 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-logs\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.841098 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-config-data\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.841116 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.841139 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.841670 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-logs\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.847825 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-config-data\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.847825 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.847918 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.849572 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-config-data\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.862713 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm2d4\" (UniqueName: \"kubernetes.io/projected/f0607a76-8412-4547-945c-f5672e9516f8-kube-api-access-bm2d4\") pod \"nova-scheduler-0\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " pod="openstack/nova-scheduler-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.862883 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnbdt\" (UniqueName: \"kubernetes.io/projected/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-kube-api-access-hnbdt\") pod \"nova-api-0\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.930391 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:14:42 crc kubenswrapper[4979]: I0130 23:14:42.961589 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.097429 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c9f1d3-a870-4b2f-bc60-e2a13d520664" path="/var/lib/kubelet/pods/41c9f1d3-a870-4b2f-bc60-e2a13d520664/volumes" Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.098157 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85c5102-a753-4ad3-9105-8d3071189381" path="/var/lib/kubelet/pods/e85c5102-a753-4ad3-9105-8d3071189381/volumes" Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.098701 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecb82300-1ad1-4a3e-aba6-3635e79512a7" path="/var/lib/kubelet/pods/ecb82300-1ad1-4a3e-aba6-3635e79512a7/volumes" Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.441945 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:14:43 crc kubenswrapper[4979]: W0130 23:14:43.441974 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a89116a_a8b5_4bf6_8e13_ec81f6b7a8c6.slice/crio-fccc7af6d03a24335474c436786fa98ab8e787f6f9254489aeae84b9f43ab229 WatchSource:0}: Error finding container fccc7af6d03a24335474c436786fa98ab8e787f6f9254489aeae84b9f43ab229: Status 404 returned error can't find the container with id fccc7af6d03a24335474c436786fa98ab8e787f6f9254489aeae84b9f43ab229 Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.623426 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6","Type":"ContainerStarted","Data":"fccc7af6d03a24335474c436786fa98ab8e787f6f9254489aeae84b9f43ab229"} Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.630477 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18e5930e-5323-4957-8495-8ccec47fcec1","Type":"ContainerStarted","Data":"679b3058f3f485f291f252eaf3bb8918f69b6e3a441b2d5608224e19d4b90456"} Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.630565 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"18e5930e-5323-4957-8495-8ccec47fcec1","Type":"ContainerStarted","Data":"7bb8706f925ad6381a16e97222bca04fa77399d32e5cbac62e5ced735b44c617"} Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.630580 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18e5930e-5323-4957-8495-8ccec47fcec1","Type":"ContainerStarted","Data":"9cc4bcac87294ecc35a0ba1173d943f288303feac35baadc83a45c516f8e77dd"} Jan 30 23:14:43 crc kubenswrapper[4979]: I0130 23:14:43.646173 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.641557 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0607a76-8412-4547-945c-f5672e9516f8","Type":"ContainerStarted","Data":"6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b"} Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.642340 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0607a76-8412-4547-945c-f5672e9516f8","Type":"ContainerStarted","Data":"97dcc53461a420d86c88ad2c9e5439b13ea4d32d8913b4f9be12a15f52d97f4b"} Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.645931 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6","Type":"ContainerStarted","Data":"15299fab35d50937a89790172b2c509a1dfb9f92863a936ca37bdeb7cff0153a"} Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.645990 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6","Type":"ContainerStarted","Data":"abc237f618c829fbfeb4a9a4ef23c8e778e88ceedcbdce08bd22dead984035c4"} Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.665674 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.6656508089999997 podStartE2EDuration="3.665650809s" podCreationTimestamp="2026-01-30 23:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:43.658678921 +0000 UTC m=+5679.619925954" watchObservedRunningTime="2026-01-30 23:14:44.665650809 +0000 UTC m=+5680.626897842" Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.670433 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.670409346 podStartE2EDuration="2.670409346s" podCreationTimestamp="2026-01-30 23:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:44.658344713 +0000 UTC m=+5680.619591836" watchObservedRunningTime="2026-01-30 23:14:44.670409346 +0000 UTC m=+5680.631656379" Jan 30 23:14:44 crc kubenswrapper[4979]: I0130 23:14:44.694284 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.694246927 podStartE2EDuration="2.694246927s" podCreationTimestamp="2026-01-30 23:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:14:44.690541358 +0000 UTC m=+5680.651788431" watchObservedRunningTime="2026-01-30 23:14:44.694246927 +0000 UTC m=+5680.655493960" Jan 30 23:14:46 crc kubenswrapper[4979]: I0130 23:14:46.932386 4979 
Jan 30 23:14:46 crc kubenswrapper[4979]: I0130 23:14:46.932662 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 23:14:47 crc kubenswrapper[4979]: I0130 23:14:47.961690 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 23:14:51 crc kubenswrapper[4979]: I0130 23:14:51.933765 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 30 23:14:51 crc kubenswrapper[4979]: I0130 23:14:51.934345 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 30 23:14:52 crc kubenswrapper[4979]: I0130 23:14:52.932827 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 23:14:52 crc kubenswrapper[4979]: I0130 23:14:52.932890 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 23:14:52 crc kubenswrapper[4979]: I0130 23:14:52.962364 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 23:14:52 crc kubenswrapper[4979]: I0130 23:14:52.973246 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 23:14:52 crc kubenswrapper[4979]: I0130 23:14:52.986681 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 23:14:53 crc kubenswrapper[4979]: I0130 23:14:53.014265 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 23:14:53 crc kubenswrapper[4979]: I0130 23:14:53.762023 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 30 23:14:54 crc kubenswrapper[4979]: I0130 23:14:54.015190 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 23:14:54 crc kubenswrapper[4979]: I0130 23:14:54.015212 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.151975 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj"]
Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.153555 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj"
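
The prober entries show startup probes against the nova pods' HTTP endpoints timing out at first ("Client.Timeout exceeded while awaiting headers") and flipping to status="started" once the services answer; readiness stays empty until the startup probe passes. A stand-alone equivalent of such an HTTP probe (the URL and timeout here are assumptions for illustration, not the pods' actual probe settings):

    // Minimal HTTP probe: GET with a short client timeout; any status below
    // 400 counts as success, mirroring Kubernetes HTTP probe semantics.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func probe(url string, timeout time.Duration) error {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            // e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 400 {
            return fmt.Errorf("probe failed: HTTP %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://10.217.1.75:8775/", time.Second); err != nil {
            fmt.Println("Probe failed:", err)
            return
        }
        fmt.Println("probe ok")
    }
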
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.155299 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.155661 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.170677 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj"] Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.214472 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/654a24ec-64c8-42fb-8ec0-f4be5297d71b-config-volume\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.214556 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/654a24ec-64c8-42fb-8ec0-f4be5297d71b-secret-volume\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.214617 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctz86\" (UniqueName: \"kubernetes.io/projected/654a24ec-64c8-42fb-8ec0-f4be5297d71b-kube-api-access-ctz86\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.316049 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/654a24ec-64c8-42fb-8ec0-f4be5297d71b-config-volume\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.316166 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/654a24ec-64c8-42fb-8ec0-f4be5297d71b-secret-volume\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.316235 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctz86\" (UniqueName: \"kubernetes.io/projected/654a24ec-64c8-42fb-8ec0-f4be5297d71b-kube-api-access-ctz86\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.317337 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/654a24ec-64c8-42fb-8ec0-f4be5297d71b-config-volume\") pod 
\"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.323046 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/654a24ec-64c8-42fb-8ec0-f4be5297d71b-secret-volume\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.335134 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctz86\" (UniqueName: \"kubernetes.io/projected/654a24ec-64c8-42fb-8ec0-f4be5297d71b-kube-api-access-ctz86\") pod \"collect-profiles-29496915-mdgwj\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.474280 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:00 crc kubenswrapper[4979]: W0130 23:15:00.933564 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod654a24ec_64c8_42fb_8ec0_f4be5297d71b.slice/crio-03649f8c7a9c9169485b3b14d2f701f658dfd8439722cf9394ababbefafe770e WatchSource:0}: Error finding container 03649f8c7a9c9169485b3b14d2f701f658dfd8439722cf9394ababbefafe770e: Status 404 returned error can't find the container with id 03649f8c7a9c9169485b3b14d2f701f658dfd8439722cf9394ababbefafe770e Jan 30 23:15:00 crc kubenswrapper[4979]: I0130 23:15:00.940229 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj"] Jan 30 23:15:01 crc kubenswrapper[4979]: I0130 23:15:01.810987 4979 generic.go:334] "Generic (PLEG): container finished" podID="654a24ec-64c8-42fb-8ec0-f4be5297d71b" containerID="eca7942dd84fb6210abbf472b1d3e584f769d90b602f1eb132be7480230768be" exitCode=0 Jan 30 23:15:01 crc kubenswrapper[4979]: I0130 23:15:01.811111 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" event={"ID":"654a24ec-64c8-42fb-8ec0-f4be5297d71b","Type":"ContainerDied","Data":"eca7942dd84fb6210abbf472b1d3e584f769d90b602f1eb132be7480230768be"} Jan 30 23:15:01 crc kubenswrapper[4979]: I0130 23:15:01.812209 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" event={"ID":"654a24ec-64c8-42fb-8ec0-f4be5297d71b","Type":"ContainerStarted","Data":"03649f8c7a9c9169485b3b14d2f701f658dfd8439722cf9394ababbefafe770e"} Jan 30 23:15:01 crc kubenswrapper[4979]: I0130 23:15:01.964743 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 23:15:01 crc kubenswrapper[4979]: I0130 23:15:01.965534 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.047142 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.047193 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.047230 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.047879 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.047937 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" gracePeriod=600 Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.093471 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 23:15:02 crc kubenswrapper[4979]: E0130 23:15:02.181733 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.829367 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" exitCode=0 Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.829417 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f"} Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.829733 4979 scope.go:117] "RemoveContainer" containerID="94f5c7990b2576813cfa39ef85f902f7a75770e6c04a43bd1848309b7c39ad19" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.830614 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:15:02 crc kubenswrapper[4979]: E0130 23:15:02.830948 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" 
podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.834897 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.939118 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.939895 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.940154 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 23:15:02 crc kubenswrapper[4979]: I0130 23:15:02.944160 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.209162 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.289864 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/654a24ec-64c8-42fb-8ec0-f4be5297d71b-config-volume\") pod \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.290002 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctz86\" (UniqueName: \"kubernetes.io/projected/654a24ec-64c8-42fb-8ec0-f4be5297d71b-kube-api-access-ctz86\") pod \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.290195 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/654a24ec-64c8-42fb-8ec0-f4be5297d71b-secret-volume\") pod \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\" (UID: \"654a24ec-64c8-42fb-8ec0-f4be5297d71b\") " Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.290912 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/654a24ec-64c8-42fb-8ec0-f4be5297d71b-config-volume" (OuterVolumeSpecName: "config-volume") pod "654a24ec-64c8-42fb-8ec0-f4be5297d71b" (UID: "654a24ec-64c8-42fb-8ec0-f4be5297d71b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.297239 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654a24ec-64c8-42fb-8ec0-f4be5297d71b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "654a24ec-64c8-42fb-8ec0-f4be5297d71b" (UID: "654a24ec-64c8-42fb-8ec0-f4be5297d71b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.302737 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654a24ec-64c8-42fb-8ec0-f4be5297d71b-kube-api-access-ctz86" (OuterVolumeSpecName: "kube-api-access-ctz86") pod "654a24ec-64c8-42fb-8ec0-f4be5297d71b" (UID: "654a24ec-64c8-42fb-8ec0-f4be5297d71b"). InnerVolumeSpecName "kube-api-access-ctz86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.392595 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/654a24ec-64c8-42fb-8ec0-f4be5297d71b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.392631 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/654a24ec-64c8-42fb-8ec0-f4be5297d71b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.392643 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctz86\" (UniqueName: \"kubernetes.io/projected/654a24ec-64c8-42fb-8ec0-f4be5297d71b-kube-api-access-ctz86\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.840668 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.840732 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496915-mdgwj" event={"ID":"654a24ec-64c8-42fb-8ec0-f4be5297d71b","Type":"ContainerDied","Data":"03649f8c7a9c9169485b3b14d2f701f658dfd8439722cf9394ababbefafe770e"} Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.841792 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03649f8c7a9c9169485b3b14d2f701f658dfd8439722cf9394ababbefafe770e" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.844668 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 23:15:03 crc kubenswrapper[4979]: I0130 23:15:03.848437 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.050122 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b676b66fc-rxm7v"] Jan 30 23:15:04 crc kubenswrapper[4979]: E0130 23:15:04.050517 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654a24ec-64c8-42fb-8ec0-f4be5297d71b" containerName="collect-profiles" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.050538 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="654a24ec-64c8-42fb-8ec0-f4be5297d71b" containerName="collect-profiles" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.050711 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="654a24ec-64c8-42fb-8ec0-f4be5297d71b" containerName="collect-profiles" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.051637 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.101777 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b676b66fc-rxm7v"] Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.209876 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-sb\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.210068 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-config\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.210131 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjpk8\" (UniqueName: \"kubernetes.io/projected/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-kube-api-access-vjpk8\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.210269 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-dns-svc\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.210563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-nb\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.286506 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"] Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.305449 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-drq2x"] Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.312367 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-sb\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.312417 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-config\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.312440 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vjpk8\" (UniqueName: \"kubernetes.io/projected/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-kube-api-access-vjpk8\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.312481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-dns-svc\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.312514 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-nb\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.313323 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-nb\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.313829 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-sb\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.314346 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-config\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.315115 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-dns-svc\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.332071 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjpk8\" (UniqueName: \"kubernetes.io/projected/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-kube-api-access-vjpk8\") pod \"dnsmasq-dns-b676b66fc-rxm7v\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.381086 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.858779 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b676b66fc-rxm7v"] Jan 30 23:15:04 crc kubenswrapper[4979]: I0130 23:15:04.876344 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" event={"ID":"62a96508-72cd-4ec2-979e-e32ed0ee4aa0","Type":"ContainerStarted","Data":"915cecbe9c6901c7d7d431835fc9e7decfca7d60cbb868415d35c96f21bde0b8"} Jan 30 23:15:05 crc kubenswrapper[4979]: I0130 23:15:05.078824 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03" path="/var/lib/kubelet/pods/9ca3fc2f-d3ff-4e0e-b9b3-9612aba84a03/volumes" Jan 30 23:15:05 crc kubenswrapper[4979]: I0130 23:15:05.885793 4979 generic.go:334] "Generic (PLEG): container finished" podID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerID="768d5b79edb701e759cd2a0fc62def57f1157b7fce4a7f7fc9d3ff38886f5e98" exitCode=0 Jan 30 23:15:05 crc kubenswrapper[4979]: I0130 23:15:05.885849 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" event={"ID":"62a96508-72cd-4ec2-979e-e32ed0ee4aa0","Type":"ContainerDied","Data":"768d5b79edb701e759cd2a0fc62def57f1157b7fce4a7f7fc9d3ff38886f5e98"} Jan 30 23:15:06 crc kubenswrapper[4979]: I0130 23:15:06.895494 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" event={"ID":"62a96508-72cd-4ec2-979e-e32ed0ee4aa0","Type":"ContainerStarted","Data":"3bd53a6c84610f3c6412de74fc9d366392b6446b2c3ad18083828e64ec458fa2"} Jan 30 23:15:06 crc kubenswrapper[4979]: I0130 23:15:06.895725 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:06 crc kubenswrapper[4979]: I0130 23:15:06.912165 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" podStartSLOduration=2.912148917 podStartE2EDuration="2.912148917s" podCreationTimestamp="2026-01-30 23:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:06.908684582 +0000 UTC m=+5702.869931615" watchObservedRunningTime="2026-01-30 23:15:06.912148917 +0000 UTC m=+5702.873395950" Jan 30 23:15:14 crc kubenswrapper[4979]: I0130 23:15:14.382592 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:14 crc kubenswrapper[4979]: I0130 23:15:14.450248 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dd456f9c9-9bcrr"] Jan 30 23:15:14 crc kubenswrapper[4979]: I0130 23:15:14.450494 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" podUID="065e25fc-286f-4759-9430-a918818caeae" containerName="dnsmasq-dns" containerID="cri-o://22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75" gracePeriod=10 Jan 30 23:15:14 crc kubenswrapper[4979]: E0130 23:15:14.642487 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod065e25fc_286f_4759_9430_a918818caeae.slice/crio-conmon-22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod065e25fc_286f_4759_9430_a918818caeae.slice/crio-22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75.scope\": RecentStats: unable to find data in memory cache]" Jan 30 23:15:14 crc kubenswrapper[4979]: I0130 23:15:14.954497 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.000552 4979 generic.go:334] "Generic (PLEG): container finished" podID="065e25fc-286f-4759-9430-a918818caeae" containerID="22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75" exitCode=0 Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.000595 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" event={"ID":"065e25fc-286f-4759-9430-a918818caeae","Type":"ContainerDied","Data":"22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75"} Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.000611 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.000633 4979 scope.go:117] "RemoveContainer" containerID="22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.000623 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd456f9c9-9bcrr" event={"ID":"065e25fc-286f-4759-9430-a918818caeae","Type":"ContainerDied","Data":"ebd3dade926a167983d467980b49120885f0e096bd8d71d96bc62f48fd9a4976"} Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.020738 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-dns-svc\") pod \"065e25fc-286f-4759-9430-a918818caeae\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.020793 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-sb\") pod \"065e25fc-286f-4759-9430-a918818caeae\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.020811 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-config\") pod \"065e25fc-286f-4759-9430-a918818caeae\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.020833 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skvhb\" (UniqueName: \"kubernetes.io/projected/065e25fc-286f-4759-9430-a918818caeae-kube-api-access-skvhb\") pod \"065e25fc-286f-4759-9430-a918818caeae\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.020928 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-nb\") pod \"065e25fc-286f-4759-9430-a918818caeae\" (UID: \"065e25fc-286f-4759-9430-a918818caeae\") " Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.033079 4979 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/065e25fc-286f-4759-9430-a918818caeae-kube-api-access-skvhb" (OuterVolumeSpecName: "kube-api-access-skvhb") pod "065e25fc-286f-4759-9430-a918818caeae" (UID: "065e25fc-286f-4759-9430-a918818caeae"). InnerVolumeSpecName "kube-api-access-skvhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.034904 4979 scope.go:117] "RemoveContainer" containerID="fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.076703 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "065e25fc-286f-4759-9430-a918818caeae" (UID: "065e25fc-286f-4759-9430-a918818caeae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.076765 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "065e25fc-286f-4759-9430-a918818caeae" (UID: "065e25fc-286f-4759-9430-a918818caeae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.078313 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:15:15 crc kubenswrapper[4979]: E0130 23:15:15.079003 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.085525 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-config" (OuterVolumeSpecName: "config") pod "065e25fc-286f-4759-9430-a918818caeae" (UID: "065e25fc-286f-4759-9430-a918818caeae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.107709 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "065e25fc-286f-4759-9430-a918818caeae" (UID: "065e25fc-286f-4759-9430-a918818caeae"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.119453 4979 scope.go:117] "RemoveContainer" containerID="22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75" Jan 30 23:15:15 crc kubenswrapper[4979]: E0130 23:15:15.122670 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75\": container with ID starting with 22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75 not found: ID does not exist" containerID="22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.122711 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75"} err="failed to get container status \"22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75\": rpc error: code = NotFound desc = could not find container \"22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75\": container with ID starting with 22d44eaa0f2c8473ce54e2325d71d496f93ec9db51efee105dfb0f0d0f4f0d75 not found: ID does not exist" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.122734 4979 scope.go:117] "RemoveContainer" containerID="fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701" Jan 30 23:15:15 crc kubenswrapper[4979]: E0130 23:15:15.122966 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701\": container with ID starting with fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701 not found: ID does not exist" containerID="fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.122984 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701"} err="failed to get container status \"fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701\": rpc error: code = NotFound desc = could not find container \"fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701\": container with ID starting with fa341355af23fe18471443b57b1a4ae52ee143cfc4f795f134161c22db96e701 not found: ID does not exist" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.129886 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.129925 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.129938 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.129950 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065e25fc-286f-4759-9430-a918818caeae-config\") on node \"crc\" 
DevicePath \"\"" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.129962 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skvhb\" (UniqueName: \"kubernetes.io/projected/065e25fc-286f-4759-9430-a918818caeae-kube-api-access-skvhb\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.330479 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dd456f9c9-9bcrr"] Jan 30 23:15:15 crc kubenswrapper[4979]: I0130 23:15:15.339567 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dd456f9c9-9bcrr"] Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.046652 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-dfcbh"] Jan 30 23:15:17 crc kubenswrapper[4979]: E0130 23:15:17.047127 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065e25fc-286f-4759-9430-a918818caeae" containerName="init" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.047144 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="065e25fc-286f-4759-9430-a918818caeae" containerName="init" Jan 30 23:15:17 crc kubenswrapper[4979]: E0130 23:15:17.047158 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065e25fc-286f-4759-9430-a918818caeae" containerName="dnsmasq-dns" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.047165 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="065e25fc-286f-4759-9430-a918818caeae" containerName="dnsmasq-dns" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.047392 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="065e25fc-286f-4759-9430-a918818caeae" containerName="dnsmasq-dns" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.048254 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.107594 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065e25fc-286f-4759-9430-a918818caeae" path="/var/lib/kubelet/pods/065e25fc-286f-4759-9430-a918818caeae/volumes" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.108424 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dfcbh"] Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.151280 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7719-account-create-update-h5jpn"] Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.152665 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.155651 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.165457 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7719-account-create-update-h5jpn"] Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.199867 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqdw4\" (UniqueName: \"kubernetes.io/projected/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-kube-api-access-cqdw4\") pod \"cinder-db-create-dfcbh\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.200496 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-operator-scripts\") pod \"cinder-db-create-dfcbh\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.301981 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9737fb48-932e-4216-a323-0fa11a0a136d-operator-scripts\") pod \"cinder-7719-account-create-update-h5jpn\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.302392 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqdw4\" (UniqueName: \"kubernetes.io/projected/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-kube-api-access-cqdw4\") pod \"cinder-db-create-dfcbh\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.302415 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-operator-scripts\") pod \"cinder-db-create-dfcbh\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.302472 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr698\" (UniqueName: \"kubernetes.io/projected/9737fb48-932e-4216-a323-0fa11a0a136d-kube-api-access-wr698\") pod \"cinder-7719-account-create-update-h5jpn\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.303113 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-operator-scripts\") pod \"cinder-db-create-dfcbh\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.321765 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqdw4\" (UniqueName: \"kubernetes.io/projected/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-kube-api-access-cqdw4\") pod \"cinder-db-create-dfcbh\" (UID: 
\"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.404010 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9737fb48-932e-4216-a323-0fa11a0a136d-operator-scripts\") pod \"cinder-7719-account-create-update-h5jpn\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.404222 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr698\" (UniqueName: \"kubernetes.io/projected/9737fb48-932e-4216-a323-0fa11a0a136d-kube-api-access-wr698\") pod \"cinder-7719-account-create-update-h5jpn\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.404779 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9737fb48-932e-4216-a323-0fa11a0a136d-operator-scripts\") pod \"cinder-7719-account-create-update-h5jpn\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.412461 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.420771 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr698\" (UniqueName: \"kubernetes.io/projected/9737fb48-932e-4216-a323-0fa11a0a136d-kube-api-access-wr698\") pod \"cinder-7719-account-create-update-h5jpn\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:17 crc kubenswrapper[4979]: I0130 23:15:17.470329 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:18 crc kubenswrapper[4979]: I0130 23:15:17.861809 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dfcbh"] Jan 30 23:15:18 crc kubenswrapper[4979]: W0130 23:15:17.861988 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87b7f12a_3ae2_43d3_83d8_ea5ac1439aed.slice/crio-6b1be56ab235bb231a13d969d9e8251d751a5f6f1ef5bf0aa70d8bea83195e0c WatchSource:0}: Error finding container 6b1be56ab235bb231a13d969d9e8251d751a5f6f1ef5bf0aa70d8bea83195e0c: Status 404 returned error can't find the container with id 6b1be56ab235bb231a13d969d9e8251d751a5f6f1ef5bf0aa70d8bea83195e0c Jan 30 23:15:18 crc kubenswrapper[4979]: I0130 23:15:18.029086 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dfcbh" event={"ID":"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed","Type":"ContainerStarted","Data":"6b1be56ab235bb231a13d969d9e8251d751a5f6f1ef5bf0aa70d8bea83195e0c"} Jan 30 23:15:18 crc kubenswrapper[4979]: I0130 23:15:18.679178 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7719-account-create-update-h5jpn"] Jan 30 23:15:18 crc kubenswrapper[4979]: W0130 23:15:18.682284 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9737fb48_932e_4216_a323_0fa11a0a136d.slice/crio-00e85d83ee05f320b1d4cf36b38d02db24f3ebeebcf0184ac6f4ff5bfb44cd4a WatchSource:0}: Error finding container 00e85d83ee05f320b1d4cf36b38d02db24f3ebeebcf0184ac6f4ff5bfb44cd4a: Status 404 returned error can't find the container with id 00e85d83ee05f320b1d4cf36b38d02db24f3ebeebcf0184ac6f4ff5bfb44cd4a Jan 30 23:15:19 crc kubenswrapper[4979]: I0130 23:15:19.039555 4979 generic.go:334] "Generic (PLEG): container finished" podID="9737fb48-932e-4216-a323-0fa11a0a136d" containerID="d7d84d9b6f642570ec9f0833c3f37b449071bcc3ab74fb1efbfc67cb25be27a7" exitCode=0 Jan 30 23:15:19 crc kubenswrapper[4979]: I0130 23:15:19.039697 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7719-account-create-update-h5jpn" event={"ID":"9737fb48-932e-4216-a323-0fa11a0a136d","Type":"ContainerDied","Data":"d7d84d9b6f642570ec9f0833c3f37b449071bcc3ab74fb1efbfc67cb25be27a7"} Jan 30 23:15:19 crc kubenswrapper[4979]: I0130 23:15:19.040053 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7719-account-create-update-h5jpn" event={"ID":"9737fb48-932e-4216-a323-0fa11a0a136d","Type":"ContainerStarted","Data":"00e85d83ee05f320b1d4cf36b38d02db24f3ebeebcf0184ac6f4ff5bfb44cd4a"} Jan 30 23:15:19 crc kubenswrapper[4979]: I0130 23:15:19.041774 4979 generic.go:334] "Generic (PLEG): container finished" podID="87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" containerID="b2aed671841955c62444becfeabff7ccb5bcd0fdccfa5d1f4e24c893f848c58c" exitCode=0 Jan 30 23:15:19 crc kubenswrapper[4979]: I0130 23:15:19.041821 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dfcbh" event={"ID":"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed","Type":"ContainerDied","Data":"b2aed671841955c62444becfeabff7ccb5bcd0fdccfa5d1f4e24c893f848c58c"} Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.483901 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.491227 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.558531 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-operator-scripts\") pod \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.558603 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqdw4\" (UniqueName: \"kubernetes.io/projected/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-kube-api-access-cqdw4\") pod \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\" (UID: \"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed\") " Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.559595 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" (UID: "87b7f12a-3ae2-43d3-83d8-ea5ac1439aed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.569471 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-kube-api-access-cqdw4" (OuterVolumeSpecName: "kube-api-access-cqdw4") pod "87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" (UID: "87b7f12a-3ae2-43d3-83d8-ea5ac1439aed"). InnerVolumeSpecName "kube-api-access-cqdw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.660435 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9737fb48-932e-4216-a323-0fa11a0a136d-operator-scripts\") pod \"9737fb48-932e-4216-a323-0fa11a0a136d\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.660491 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr698\" (UniqueName: \"kubernetes.io/projected/9737fb48-932e-4216-a323-0fa11a0a136d-kube-api-access-wr698\") pod \"9737fb48-932e-4216-a323-0fa11a0a136d\" (UID: \"9737fb48-932e-4216-a323-0fa11a0a136d\") " Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.660844 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.660861 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqdw4\" (UniqueName: \"kubernetes.io/projected/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed-kube-api-access-cqdw4\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.663556 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9737fb48-932e-4216-a323-0fa11a0a136d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9737fb48-932e-4216-a323-0fa11a0a136d" (UID: "9737fb48-932e-4216-a323-0fa11a0a136d"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.667254 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9737fb48-932e-4216-a323-0fa11a0a136d-kube-api-access-wr698" (OuterVolumeSpecName: "kube-api-access-wr698") pod "9737fb48-932e-4216-a323-0fa11a0a136d" (UID: "9737fb48-932e-4216-a323-0fa11a0a136d"). InnerVolumeSpecName "kube-api-access-wr698". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.762966 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9737fb48-932e-4216-a323-0fa11a0a136d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:20 crc kubenswrapper[4979]: I0130 23:15:20.763001 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr698\" (UniqueName: \"kubernetes.io/projected/9737fb48-932e-4216-a323-0fa11a0a136d-kube-api-access-wr698\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:21 crc kubenswrapper[4979]: I0130 23:15:21.060698 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7719-account-create-update-h5jpn" Jan 30 23:15:21 crc kubenswrapper[4979]: I0130 23:15:21.060683 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7719-account-create-update-h5jpn" event={"ID":"9737fb48-932e-4216-a323-0fa11a0a136d","Type":"ContainerDied","Data":"00e85d83ee05f320b1d4cf36b38d02db24f3ebeebcf0184ac6f4ff5bfb44cd4a"} Jan 30 23:15:21 crc kubenswrapper[4979]: I0130 23:15:21.060835 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00e85d83ee05f320b1d4cf36b38d02db24f3ebeebcf0184ac6f4ff5bfb44cd4a" Jan 30 23:15:21 crc kubenswrapper[4979]: I0130 23:15:21.061934 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dfcbh" event={"ID":"87b7f12a-3ae2-43d3-83d8-ea5ac1439aed","Type":"ContainerDied","Data":"6b1be56ab235bb231a13d969d9e8251d751a5f6f1ef5bf0aa70d8bea83195e0c"} Jan 30 23:15:21 crc kubenswrapper[4979]: I0130 23:15:21.061971 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b1be56ab235bb231a13d969d9e8251d751a5f6f1ef5bf0aa70d8bea83195e0c" Jan 30 23:15:21 crc kubenswrapper[4979]: I0130 23:15:21.061985 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dfcbh" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.548879 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-x8rfx"] Jan 30 23:15:22 crc kubenswrapper[4979]: E0130 23:15:22.549605 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9737fb48-932e-4216-a323-0fa11a0a136d" containerName="mariadb-account-create-update" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.549617 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9737fb48-932e-4216-a323-0fa11a0a136d" containerName="mariadb-account-create-update" Jan 30 23:15:22 crc kubenswrapper[4979]: E0130 23:15:22.549637 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" containerName="mariadb-database-create" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.549643 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" containerName="mariadb-database-create" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.549850 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" containerName="mariadb-database-create" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.549869 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9737fb48-932e-4216-a323-0fa11a0a136d" containerName="mariadb-account-create-update" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.550500 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.559464 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x8rfx"] Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.559581 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.559716 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.560309 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4jjsh" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698724 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698796 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqqf\" (UniqueName: \"kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698862 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698885 
4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f36c73f1-9737-467c-a014-5ac45eb3f512-etc-machine-id\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698920 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-db-sync-config-data\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.698945 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-config-data\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800290 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-db-sync-config-data\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800347 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-config-data\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800427 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800457 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqqf\" (UniqueName: \"kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800497 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800516 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f36c73f1-9737-467c-a014-5ac45eb3f512-etc-machine-id\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.800584 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f36c73f1-9737-467c-a014-5ac45eb3f512-etc-machine-id\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.805637 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.811254 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-config-data\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.811696 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-db-sync-config-data\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.812575 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.821341 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqqf\" (UniqueName: \"kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf\") pod \"cinder-db-sync-x8rfx\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:22 crc kubenswrapper[4979]: I0130 23:15:22.882438 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:23 crc kubenswrapper[4979]: I0130 23:15:23.328692 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x8rfx"] Jan 30 23:15:24 crc kubenswrapper[4979]: I0130 23:15:24.094057 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x8rfx" event={"ID":"f36c73f1-9737-467c-a014-5ac45eb3f512","Type":"ContainerStarted","Data":"41cdb0291361e0a8365a54c79470d747f6d9eb9bfc7ccb69ab8969a4d5853007"} Jan 30 23:15:24 crc kubenswrapper[4979]: I0130 23:15:24.094473 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x8rfx" event={"ID":"f36c73f1-9737-467c-a014-5ac45eb3f512","Type":"ContainerStarted","Data":"b9541c5802fcf035cfc55841001b2271cfaa6f01741f050b6430e48441fab1f9"} Jan 30 23:15:24 crc kubenswrapper[4979]: I0130 23:15:24.113143 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-x8rfx" podStartSLOduration=2.113068847 podStartE2EDuration="2.113068847s" podCreationTimestamp="2026-01-30 23:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:24.112237564 +0000 UTC m=+5720.073484597" watchObservedRunningTime="2026-01-30 23:15:24.113068847 +0000 UTC m=+5720.074315880" Jan 30 23:15:26 crc kubenswrapper[4979]: I0130 23:15:26.069935 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:15:26 crc kubenswrapper[4979]: E0130 23:15:26.070996 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:15:26 crc kubenswrapper[4979]: I0130 23:15:26.976281 4979 scope.go:117] "RemoveContainer" containerID="06f1c39be4f79a10738471e24d46871dad22c8321fde40d1075b882f27317030" Jan 30 23:15:27 crc kubenswrapper[4979]: I0130 23:15:27.124331 4979 generic.go:334] "Generic (PLEG): container finished" podID="f36c73f1-9737-467c-a014-5ac45eb3f512" containerID="41cdb0291361e0a8365a54c79470d747f6d9eb9bfc7ccb69ab8969a4d5853007" exitCode=0 Jan 30 23:15:27 crc kubenswrapper[4979]: I0130 23:15:27.124373 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x8rfx" event={"ID":"f36c73f1-9737-467c-a014-5ac45eb3f512","Type":"ContainerDied","Data":"41cdb0291361e0a8365a54c79470d747f6d9eb9bfc7ccb69ab8969a4d5853007"} Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.483671 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.606538 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts\") pod \"f36c73f1-9737-467c-a014-5ac45eb3f512\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.606604 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f36c73f1-9737-467c-a014-5ac45eb3f512-etc-machine-id\") pod \"f36c73f1-9737-467c-a014-5ac45eb3f512\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.606681 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-config-data\") pod \"f36c73f1-9737-467c-a014-5ac45eb3f512\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.606702 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-db-sync-config-data\") pod \"f36c73f1-9737-467c-a014-5ac45eb3f512\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.606734 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle\") pod \"f36c73f1-9737-467c-a014-5ac45eb3f512\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.606758 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkqqf\" (UniqueName: \"kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf\") pod \"f36c73f1-9737-467c-a014-5ac45eb3f512\" (UID: \"f36c73f1-9737-467c-a014-5ac45eb3f512\") " Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.607488 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f36c73f1-9737-467c-a014-5ac45eb3f512-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f36c73f1-9737-467c-a014-5ac45eb3f512" (UID: "f36c73f1-9737-467c-a014-5ac45eb3f512"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.613412 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f36c73f1-9737-467c-a014-5ac45eb3f512" (UID: "f36c73f1-9737-467c-a014-5ac45eb3f512"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.613760 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf" (OuterVolumeSpecName: "kube-api-access-vkqqf") pod "f36c73f1-9737-467c-a014-5ac45eb3f512" (UID: "f36c73f1-9737-467c-a014-5ac45eb3f512"). InnerVolumeSpecName "kube-api-access-vkqqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.626224 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts" (OuterVolumeSpecName: "scripts") pod "f36c73f1-9737-467c-a014-5ac45eb3f512" (UID: "f36c73f1-9737-467c-a014-5ac45eb3f512"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.635195 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f36c73f1-9737-467c-a014-5ac45eb3f512" (UID: "f36c73f1-9737-467c-a014-5ac45eb3f512"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.671132 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-config-data" (OuterVolumeSpecName: "config-data") pod "f36c73f1-9737-467c-a014-5ac45eb3f512" (UID: "f36c73f1-9737-467c-a014-5ac45eb3f512"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.708632 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.708675 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f36c73f1-9737-467c-a014-5ac45eb3f512-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.708689 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.708702 4979 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.708714 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36c73f1-9737-467c-a014-5ac45eb3f512-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:28 crc kubenswrapper[4979]: I0130 23:15:28.708726 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkqqf\" (UniqueName: \"kubernetes.io/projected/f36c73f1-9737-467c-a014-5ac45eb3f512-kube-api-access-vkqqf\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.142170 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x8rfx" event={"ID":"f36c73f1-9737-467c-a014-5ac45eb3f512","Type":"ContainerDied","Data":"b9541c5802fcf035cfc55841001b2271cfaa6f01741f050b6430e48441fab1f9"} Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.142203 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-x8rfx" Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.142208 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9541c5802fcf035cfc55841001b2271cfaa6f01741f050b6430e48441fab1f9" Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.498143 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-689759d469-jqhxp"] Jan 30 23:15:29 crc kubenswrapper[4979]: E0130 23:15:29.498928 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f36c73f1-9737-467c-a014-5ac45eb3f512" containerName="cinder-db-sync" Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.498943 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f36c73f1-9737-467c-a014-5ac45eb3f512" containerName="cinder-db-sync" Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.499194 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f36c73f1-9737-467c-a014-5ac45eb3f512" containerName="cinder-db-sync" Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.500265 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689759d469-jqhxp" Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.521072 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689759d469-jqhxp"] Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.652635 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-ovsdbserver-sb\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp" Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.652722 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-ovsdbserver-nb\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp" Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.652828 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-config\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp" Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.652913 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-dns-svc\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp" Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.652958 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4h67\" (UniqueName: \"kubernetes.io/projected/d2693393-b0b5-4009-9c45-80d154fa756c-kube-api-access-c4h67\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp" Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.743274 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] 
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.744714 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.748501 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.748918 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.751275 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4jjsh"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.752550 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.753947 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.754053 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-dns-svc\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.754099 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4h67\" (UniqueName: \"kubernetes.io/projected/d2693393-b0b5-4009-9c45-80d154fa756c-kube-api-access-c4h67\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.754154 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-ovsdbserver-sb\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.754198 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-ovsdbserver-nb\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.754248 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-config\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.755139 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-config\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.755165 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-ovsdbserver-nb\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.755457 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-dns-svc\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.755657 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2693393-b0b5-4009-9c45-80d154fa756c-ovsdbserver-sb\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.785976 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4h67\" (UniqueName: \"kubernetes.io/projected/d2693393-b0b5-4009-9c45-80d154fa756c-kube-api-access-c4h67\") pod \"dnsmasq-dns-689759d469-jqhxp\" (UID: \"d2693393-b0b5-4009-9c45-80d154fa756c\") " pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.824511 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689759d469-jqhxp"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856206 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wnld\" (UniqueName: \"kubernetes.io/projected/e5c27922-b152-465f-b0fe-117e336c7ae0-kube-api-access-9wnld\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856265 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-scripts\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856327 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5c27922-b152-465f-b0fe-117e336c7ae0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856365 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c27922-b152-465f-b0fe-117e336c7ae0-logs\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856889 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.856927 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data-custom\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.959394 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5c27922-b152-465f-b0fe-117e336c7ae0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.959775 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c27922-b152-465f-b0fe-117e336c7ae0-logs\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.959804 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.959844 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.960252 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data-custom\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.960311 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wnld\" (UniqueName: \"kubernetes.io/projected/e5c27922-b152-465f-b0fe-117e336c7ae0-kube-api-access-9wnld\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.960372 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-scripts\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.959625 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5c27922-b152-465f-b0fe-117e336c7ae0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.963475 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c27922-b152-465f-b0fe-117e336c7ae0-logs\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.964383 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.970449 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data-custom\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.972107 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-scripts\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.989519 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wnld\" (UniqueName: \"kubernetes.io/projected/e5c27922-b152-465f-b0fe-117e336c7ae0-kube-api-access-9wnld\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:29 crc kubenswrapper[4979]: I0130 23:15:29.995144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data\") pod \"cinder-api-0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " pod="openstack/cinder-api-0"
Jan 30 23:15:30 crc kubenswrapper[4979]: I0130 23:15:30.064867 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 23:15:30 crc kubenswrapper[4979]: I0130 23:15:30.369178 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689759d469-jqhxp"] Jan 30 23:15:30 crc kubenswrapper[4979]: I0130 23:15:30.524131 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:15:30 crc kubenswrapper[4979]: W0130 23:15:30.528167 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5c27922_b152_465f_b0fe_117e336c7ae0.slice/crio-5008657448c2eb55e18f087e3d982962674118c540ee915b230deaa21dab8bb1 WatchSource:0}: Error finding container 5008657448c2eb55e18f087e3d982962674118c540ee915b230deaa21dab8bb1: Status 404 returned error can't find the container with id 5008657448c2eb55e18f087e3d982962674118c540ee915b230deaa21dab8bb1 Jan 30 23:15:31 crc kubenswrapper[4979]: I0130 23:15:31.182863 4979 generic.go:334] "Generic (PLEG): container finished" podID="d2693393-b0b5-4009-9c45-80d154fa756c" containerID="c074709a0faa2cb5220ed986496fe6c89e9146b920a08c2bb0db74a260346281" exitCode=0 Jan 30 23:15:31 crc kubenswrapper[4979]: I0130 23:15:31.183263 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689759d469-jqhxp" event={"ID":"d2693393-b0b5-4009-9c45-80d154fa756c","Type":"ContainerDied","Data":"c074709a0faa2cb5220ed986496fe6c89e9146b920a08c2bb0db74a260346281"} Jan 30 23:15:31 crc kubenswrapper[4979]: I0130 23:15:31.183299 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689759d469-jqhxp" event={"ID":"d2693393-b0b5-4009-9c45-80d154fa756c","Type":"ContainerStarted","Data":"9b7617c198745bcfa434cf6f8a128bc1ff779ad2c3e34b45f53e616e03af51f2"} Jan 30 23:15:31 crc kubenswrapper[4979]: I0130 23:15:31.199715 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5c27922-b152-465f-b0fe-117e336c7ae0","Type":"ContainerStarted","Data":"436b6e4f92a01386ad3771816421209270d94a742621f8204dcf3dd212a924ec"} Jan 30 23:15:31 crc kubenswrapper[4979]: I0130 23:15:31.199763 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5c27922-b152-465f-b0fe-117e336c7ae0","Type":"ContainerStarted","Data":"5008657448c2eb55e18f087e3d982962674118c540ee915b230deaa21dab8bb1"} Jan 30 23:15:32 crc kubenswrapper[4979]: I0130 23:15:32.210612 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689759d469-jqhxp" event={"ID":"d2693393-b0b5-4009-9c45-80d154fa756c","Type":"ContainerStarted","Data":"ca77b320765f5f31e48b17b88339066c574f39f9bd13db2b1a8524882f0f78e3"} Jan 30 23:15:32 crc kubenswrapper[4979]: I0130 23:15:32.211593 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-689759d469-jqhxp" Jan 30 23:15:32 crc kubenswrapper[4979]: I0130 23:15:32.212601 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5c27922-b152-465f-b0fe-117e336c7ae0","Type":"ContainerStarted","Data":"e76ef2119f87eb1d394842edac3278d0622498b6594da66e394b4ae5f6cc97f2"} Jan 30 23:15:32 crc kubenswrapper[4979]: I0130 23:15:32.212801 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 23:15:32 crc kubenswrapper[4979]: I0130 23:15:32.234593 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-689759d469-jqhxp" podStartSLOduration=3.234576662 
podStartE2EDuration="3.234576662s" podCreationTimestamp="2026-01-30 23:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:32.227996545 +0000 UTC m=+5728.189243578" watchObservedRunningTime="2026-01-30 23:15:32.234576662 +0000 UTC m=+5728.195823685" Jan 30 23:15:39 crc kubenswrapper[4979]: I0130 23:15:39.826281 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-689759d469-jqhxp" Jan 30 23:15:39 crc kubenswrapper[4979]: I0130 23:15:39.857556 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.857538203 podStartE2EDuration="10.857538203s" podCreationTimestamp="2026-01-30 23:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:32.260588687 +0000 UTC m=+5728.221835720" watchObservedRunningTime="2026-01-30 23:15:39.857538203 +0000 UTC m=+5735.818785236" Jan 30 23:15:39 crc kubenswrapper[4979]: I0130 23:15:39.889826 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b676b66fc-rxm7v"] Jan 30 23:15:39 crc kubenswrapper[4979]: I0130 23:15:39.890083 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerName="dnsmasq-dns" containerID="cri-o://3bd53a6c84610f3c6412de74fc9d366392b6446b2c3ad18083828e64ec458fa2" gracePeriod=10 Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.070009 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:15:40 crc kubenswrapper[4979]: E0130 23:15:40.070421 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.294008 4979 generic.go:334] "Generic (PLEG): container finished" podID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerID="3bd53a6c84610f3c6412de74fc9d366392b6446b2c3ad18083828e64ec458fa2" exitCode=0 Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.294461 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" event={"ID":"62a96508-72cd-4ec2-979e-e32ed0ee4aa0","Type":"ContainerDied","Data":"3bd53a6c84610f3c6412de74fc9d366392b6446b2c3ad18083828e64ec458fa2"} Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.377603 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.471856 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-sb\") pod \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.472239 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjpk8\" (UniqueName: \"kubernetes.io/projected/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-kube-api-access-vjpk8\") pod \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.472395 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-dns-svc\") pod \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.472625 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-config\") pod \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.472734 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-nb\") pod \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\" (UID: \"62a96508-72cd-4ec2-979e-e32ed0ee4aa0\") " Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.492433 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-kube-api-access-vjpk8" (OuterVolumeSpecName: "kube-api-access-vjpk8") pod "62a96508-72cd-4ec2-979e-e32ed0ee4aa0" (UID: "62a96508-72cd-4ec2-979e-e32ed0ee4aa0"). InnerVolumeSpecName "kube-api-access-vjpk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.533112 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "62a96508-72cd-4ec2-979e-e32ed0ee4aa0" (UID: "62a96508-72cd-4ec2-979e-e32ed0ee4aa0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.542442 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "62a96508-72cd-4ec2-979e-e32ed0ee4aa0" (UID: "62a96508-72cd-4ec2-979e-e32ed0ee4aa0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.554582 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "62a96508-72cd-4ec2-979e-e32ed0ee4aa0" (UID: "62a96508-72cd-4ec2-979e-e32ed0ee4aa0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.558766 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-config" (OuterVolumeSpecName: "config") pod "62a96508-72cd-4ec2-979e-e32ed0ee4aa0" (UID: "62a96508-72cd-4ec2-979e-e32ed0ee4aa0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.575601 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjpk8\" (UniqueName: \"kubernetes.io/projected/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-kube-api-access-vjpk8\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.575650 4979 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.575662 4979 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.575673 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:40 crc kubenswrapper[4979]: I0130 23:15:40.575685 4979 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62a96508-72cd-4ec2-979e-e32ed0ee4aa0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.319518 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" event={"ID":"62a96508-72cd-4ec2-979e-e32ed0ee4aa0","Type":"ContainerDied","Data":"915cecbe9c6901c7d7d431835fc9e7decfca7d60cbb868415d35c96f21bde0b8"} Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.319619 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b676b66fc-rxm7v" Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.319803 4979 scope.go:117] "RemoveContainer" containerID="3bd53a6c84610f3c6412de74fc9d366392b6446b2c3ad18083828e64ec458fa2" Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.351386 4979 scope.go:117] "RemoveContainer" containerID="768d5b79edb701e759cd2a0fc62def57f1157b7fce4a7f7fc9d3ff38886f5e98" Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.369193 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b676b66fc-rxm7v"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.391695 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b676b66fc-rxm7v"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.680266 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.680499 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f0607a76-8412-4547-945c-f5672e9516f8" containerName="nova-scheduler-scheduler" containerID="cri-o://6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" gracePeriod=30 Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.690858 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.691124 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0f10fb19-9eb0-41eb-ba70-763c84417475" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://289ea2f878527cc1ce3d30aa55708642be8e3a359625e35a240bb392a2b265b3" gracePeriod=30 Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.703780 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.704060 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="bf4fb85a-b378-482c-92d5-34f7f4e99e23" containerName="nova-cell0-conductor-conductor" containerID="cri-o://b1be02d2bf255d1b81aa392709216377316fd4e1a002d3ec334823ab28566e4a" gracePeriod=30 Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.711343 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.711579 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-log" containerID="cri-o://abc237f618c829fbfeb4a9a4ef23c8e778e88ceedcbdce08bd22dead984035c4" gracePeriod=30 Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.711734 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-api" containerID="cri-o://15299fab35d50937a89790172b2c509a1dfb9f92863a936ca37bdeb7cff0153a" gracePeriod=30 Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.752538 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.752792 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" 
containerName="nova-metadata-log" containerID="cri-o://7bb8706f925ad6381a16e97222bca04fa77399d32e5cbac62e5ced735b44c617" gracePeriod=30 Jan 30 23:15:41 crc kubenswrapper[4979]: I0130 23:15:41.752896 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-metadata" containerID="cri-o://679b3058f3f485f291f252eaf3bb8918f69b6e3a441b2d5608224e19d4b90456" gracePeriod=30 Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.173439 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.329584 4979 generic.go:334] "Generic (PLEG): container finished" podID="0f10fb19-9eb0-41eb-ba70-763c84417475" containerID="289ea2f878527cc1ce3d30aa55708642be8e3a359625e35a240bb392a2b265b3" exitCode=0 Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.329958 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0f10fb19-9eb0-41eb-ba70-763c84417475","Type":"ContainerDied","Data":"289ea2f878527cc1ce3d30aa55708642be8e3a359625e35a240bb392a2b265b3"} Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.331585 4979 generic.go:334] "Generic (PLEG): container finished" podID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerID="abc237f618c829fbfeb4a9a4ef23c8e778e88ceedcbdce08bd22dead984035c4" exitCode=143 Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.331617 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6","Type":"ContainerDied","Data":"abc237f618c829fbfeb4a9a4ef23c8e778e88ceedcbdce08bd22dead984035c4"} Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.333764 4979 generic.go:334] "Generic (PLEG): container finished" podID="18e5930e-5323-4957-8495-8ccec47fcec1" containerID="7bb8706f925ad6381a16e97222bca04fa77399d32e5cbac62e5ced735b44c617" exitCode=143 Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.333782 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18e5930e-5323-4957-8495-8ccec47fcec1","Type":"ContainerDied","Data":"7bb8706f925ad6381a16e97222bca04fa77399d32e5cbac62e5ced735b44c617"} Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.612062 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.720378 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-combined-ca-bundle\") pod \"0f10fb19-9eb0-41eb-ba70-763c84417475\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.720484 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-config-data\") pod \"0f10fb19-9eb0-41eb-ba70-763c84417475\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.720625 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcjfm\" (UniqueName: \"kubernetes.io/projected/0f10fb19-9eb0-41eb-ba70-763c84417475-kube-api-access-jcjfm\") pod \"0f10fb19-9eb0-41eb-ba70-763c84417475\" (UID: \"0f10fb19-9eb0-41eb-ba70-763c84417475\") " Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.745496 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f10fb19-9eb0-41eb-ba70-763c84417475-kube-api-access-jcjfm" (OuterVolumeSpecName: "kube-api-access-jcjfm") pod "0f10fb19-9eb0-41eb-ba70-763c84417475" (UID: "0f10fb19-9eb0-41eb-ba70-763c84417475"). InnerVolumeSpecName "kube-api-access-jcjfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.749225 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f10fb19-9eb0-41eb-ba70-763c84417475" (UID: "0f10fb19-9eb0-41eb-ba70-763c84417475"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.749446 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-config-data" (OuterVolumeSpecName: "config-data") pod "0f10fb19-9eb0-41eb-ba70-763c84417475" (UID: "0f10fb19-9eb0-41eb-ba70-763c84417475"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.823289 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.823760 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f10fb19-9eb0-41eb-ba70-763c84417475-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:42 crc kubenswrapper[4979]: I0130 23:15:42.823772 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcjfm\" (UniqueName: \"kubernetes.io/projected/0f10fb19-9eb0-41eb-ba70-763c84417475-kube-api-access-jcjfm\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:42 crc kubenswrapper[4979]: E0130 23:15:42.964454 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 23:15:42 crc kubenswrapper[4979]: E0130 23:15:42.967464 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 23:15:42 crc kubenswrapper[4979]: E0130 23:15:42.969362 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 23:15:42 crc kubenswrapper[4979]: E0130 23:15:42.969435 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f0607a76-8412-4547-945c-f5672e9516f8" containerName="nova-scheduler-scheduler" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.079396 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" path="/var/lib/kubelet/pods/62a96508-72cd-4ec2-979e-e32ed0ee4aa0/volumes" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.344627 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0f10fb19-9eb0-41eb-ba70-763c84417475","Type":"ContainerDied","Data":"5627b81b735a8cbed66e09b5ef728389679540accf46c25407eeb57a30eabe48"} Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.344673 4979 scope.go:117] "RemoveContainer" containerID="289ea2f878527cc1ce3d30aa55708642be8e3a359625e35a240bb392a2b265b3" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.344767 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.380210 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.395764 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.417101 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:15:43 crc kubenswrapper[4979]: E0130 23:15:43.417685 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerName="dnsmasq-dns" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.417751 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerName="dnsmasq-dns" Jan 30 23:15:43 crc kubenswrapper[4979]: E0130 23:15:43.417819 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f10fb19-9eb0-41eb-ba70-763c84417475" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.417882 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f10fb19-9eb0-41eb-ba70-763c84417475" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 23:15:43 crc kubenswrapper[4979]: E0130 23:15:43.417957 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerName="init" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.418014 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerName="init" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.418315 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a96508-72cd-4ec2-979e-e32ed0ee4aa0" containerName="dnsmasq-dns" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.418396 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f10fb19-9eb0-41eb-ba70-763c84417475" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.419100 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.421954 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.434618 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.537593 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6x5h\" (UniqueName: \"kubernetes.io/projected/f517549b-f450-42f3-9445-6b45713a7328-kube-api-access-m6x5h\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.537862 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f517549b-f450-42f3-9445-6b45713a7328-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.537957 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f517549b-f450-42f3-9445-6b45713a7328-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.640673 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f517549b-f450-42f3-9445-6b45713a7328-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.641131 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f517549b-f450-42f3-9445-6b45713a7328-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.641261 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6x5h\" (UniqueName: \"kubernetes.io/projected/f517549b-f450-42f3-9445-6b45713a7328-kube-api-access-m6x5h\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.647264 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f517549b-f450-42f3-9445-6b45713a7328-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.651245 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f517549b-f450-42f3-9445-6b45713a7328-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.668593 
4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6x5h\" (UniqueName: \"kubernetes.io/projected/f517549b-f450-42f3-9445-6b45713a7328-kube-api-access-m6x5h\") pod \"nova-cell1-novncproxy-0\" (UID: \"f517549b-f450-42f3-9445-6b45713a7328\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:43 crc kubenswrapper[4979]: I0130 23:15:43.738845 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.237434 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 23:15:44 crc kubenswrapper[4979]: W0130 23:15:44.246172 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf517549b_f450_42f3_9445_6b45713a7328.slice/crio-34f3a8c7407484fa140beec3891ca5533587a7056be8df6b5be1aab9d4c4fd4f WatchSource:0}: Error finding container 34f3a8c7407484fa140beec3891ca5533587a7056be8df6b5be1aab9d4c4fd4f: Status 404 returned error can't find the container with id 34f3a8c7407484fa140beec3891ca5533587a7056be8df6b5be1aab9d4c4fd4f Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.355968 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f517549b-f450-42f3-9445-6b45713a7328","Type":"ContainerStarted","Data":"34f3a8c7407484fa140beec3891ca5533587a7056be8df6b5be1aab9d4c4fd4f"} Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.361179 4979 generic.go:334] "Generic (PLEG): container finished" podID="bf4fb85a-b378-482c-92d5-34f7f4e99e23" containerID="b1be02d2bf255d1b81aa392709216377316fd4e1a002d3ec334823ab28566e4a" exitCode=0 Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.361288 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf4fb85a-b378-482c-92d5-34f7f4e99e23","Type":"ContainerDied","Data":"b1be02d2bf255d1b81aa392709216377316fd4e1a002d3ec334823ab28566e4a"} Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.375573 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.455074 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-config-data\") pod \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.455116 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-combined-ca-bundle\") pod \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.455215 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvjxj\" (UniqueName: \"kubernetes.io/projected/bf4fb85a-b378-482c-92d5-34f7f4e99e23-kube-api-access-hvjxj\") pod \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\" (UID: \"bf4fb85a-b378-482c-92d5-34f7f4e99e23\") " Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.461138 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4fb85a-b378-482c-92d5-34f7f4e99e23-kube-api-access-hvjxj" (OuterVolumeSpecName: "kube-api-access-hvjxj") pod "bf4fb85a-b378-482c-92d5-34f7f4e99e23" (UID: "bf4fb85a-b378-482c-92d5-34f7f4e99e23"). InnerVolumeSpecName "kube-api-access-hvjxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.479461 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf4fb85a-b378-482c-92d5-34f7f4e99e23" (UID: "bf4fb85a-b378-482c-92d5-34f7f4e99e23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.482122 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-config-data" (OuterVolumeSpecName: "config-data") pod "bf4fb85a-b378-482c-92d5-34f7f4e99e23" (UID: "bf4fb85a-b378-482c-92d5-34f7f4e99e23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.557694 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvjxj\" (UniqueName: \"kubernetes.io/projected/bf4fb85a-b378-482c-92d5-34f7f4e99e23-kube-api-access-hvjxj\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.557743 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.557763 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fb85a-b378-482c-92d5-34f7f4e99e23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.886709 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": read tcp 10.217.0.2:45808->10.217.1.75:8775: read: connection reset by peer" Jan 30 23:15:44 crc kubenswrapper[4979]: I0130 23:15:44.886786 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": read tcp 10.217.0.2:45804->10.217.1.75:8775: read: connection reset by peer" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.099433 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f10fb19-9eb0-41eb-ba70-763c84417475" path="/var/lib/kubelet/pods/0f10fb19-9eb0-41eb-ba70-763c84417475/volumes" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.294194 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": read tcp 10.217.0.2:42732->10.217.1.76:8774: read: connection reset by peer" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.294257 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": read tcp 10.217.0.2:42734->10.217.1.76:8774: read: connection reset by peer" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.374955 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.375349 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="4b128be7-1d02-4fdc-aa5d-356001e694ce" containerName="nova-cell1-conductor-conductor" containerID="cri-o://d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d" gracePeriod=30 Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.390798 4979 generic.go:334] "Generic (PLEG): container finished" podID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerID="15299fab35d50937a89790172b2c509a1dfb9f92863a936ca37bdeb7cff0153a" exitCode=0 Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.390883 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6","Type":"ContainerDied","Data":"15299fab35d50937a89790172b2c509a1dfb9f92863a936ca37bdeb7cff0153a"} Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.398217 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f517549b-f450-42f3-9445-6b45713a7328","Type":"ContainerStarted","Data":"a9bf50ac422fb82972a106094af6a40acd911fa9017c807b57bb5857511dc6b1"} Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.423270 4979 generic.go:334] "Generic (PLEG): container finished" podID="18e5930e-5323-4957-8495-8ccec47fcec1" containerID="679b3058f3f485f291f252eaf3bb8918f69b6e3a441b2d5608224e19d4b90456" exitCode=0 Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.423384 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18e5930e-5323-4957-8495-8ccec47fcec1","Type":"ContainerDied","Data":"679b3058f3f485f291f252eaf3bb8918f69b6e3a441b2d5608224e19d4b90456"} Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.423412 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"18e5930e-5323-4957-8495-8ccec47fcec1","Type":"ContainerDied","Data":"9cc4bcac87294ecc35a0ba1173d943f288303feac35baadc83a45c516f8e77dd"} Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.423423 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cc4bcac87294ecc35a0ba1173d943f288303feac35baadc83a45c516f8e77dd" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.429422 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.42940412 podStartE2EDuration="2.42940412s" podCreationTimestamp="2026-01-30 23:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:45.424841457 +0000 UTC m=+5741.386088480" watchObservedRunningTime="2026-01-30 23:15:45.42940412 +0000 UTC m=+5741.390651153" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.441307 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf4fb85a-b378-482c-92d5-34f7f4e99e23","Type":"ContainerDied","Data":"291809dc6734d5a9dd972c012cc5bf6b3603448d28e739ac608b8b509bef5d72"} Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.441361 4979 scope.go:117] "RemoveContainer" containerID="b1be02d2bf255d1b81aa392709216377316fd4e1a002d3ec334823ab28566e4a" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.441544 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: E0130 23:15:45.443655 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf4fb85a_b378_482c_92d5_34f7f4e99e23.slice/crio-291809dc6734d5a9dd972c012cc5bf6b3603448d28e739ac608b8b509bef5d72\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a89116a_a8b5_4bf6_8e13_ec81f6b7a8c6.slice/crio-15299fab35d50937a89790172b2c509a1dfb9f92863a936ca37bdeb7cff0153a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf4fb85a_b378_482c_92d5_34f7f4e99e23.slice\": RecentStats: unable to find data in memory cache]" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.465189 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.497095 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.510081 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528261 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:15:45 crc kubenswrapper[4979]: E0130 23:15:45.528616 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4fb85a-b378-482c-92d5-34f7f4e99e23" containerName="nova-cell0-conductor-conductor" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528646 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4fb85a-b378-482c-92d5-34f7f4e99e23" containerName="nova-cell0-conductor-conductor" Jan 30 23:15:45 crc kubenswrapper[4979]: E0130 23:15:45.528669 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-log" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528675 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-log" Jan 30 23:15:45 crc kubenswrapper[4979]: E0130 23:15:45.528695 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-metadata" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528702 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-metadata" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528870 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-log" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528893 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" containerName="nova-metadata-metadata" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.528908 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf4fb85a-b378-482c-92d5-34f7f4e99e23" containerName="nova-cell0-conductor-conductor" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.529473 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.538485 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.541179 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.584854 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-combined-ca-bundle\") pod \"18e5930e-5323-4957-8495-8ccec47fcec1\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.584937 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e5930e-5323-4957-8495-8ccec47fcec1-logs\") pod \"18e5930e-5323-4957-8495-8ccec47fcec1\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.584978 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-config-data\") pod \"18e5930e-5323-4957-8495-8ccec47fcec1\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.585015 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzlfd\" (UniqueName: \"kubernetes.io/projected/18e5930e-5323-4957-8495-8ccec47fcec1-kube-api-access-gzlfd\") pod \"18e5930e-5323-4957-8495-8ccec47fcec1\" (UID: \"18e5930e-5323-4957-8495-8ccec47fcec1\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.585322 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274c05f8-cb23-41d5-b911-5d13bac207a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.585352 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rvqh\" (UniqueName: \"kubernetes.io/projected/274c05f8-cb23-41d5-b911-5d13bac207a0-kube-api-access-7rvqh\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.585407 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274c05f8-cb23-41d5-b911-5d13bac207a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.586597 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18e5930e-5323-4957-8495-8ccec47fcec1-logs" (OuterVolumeSpecName: "logs") pod "18e5930e-5323-4957-8495-8ccec47fcec1" (UID: "18e5930e-5323-4957-8495-8ccec47fcec1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.596801 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e5930e-5323-4957-8495-8ccec47fcec1-kube-api-access-gzlfd" (OuterVolumeSpecName: "kube-api-access-gzlfd") pod "18e5930e-5323-4957-8495-8ccec47fcec1" (UID: "18e5930e-5323-4957-8495-8ccec47fcec1"). InnerVolumeSpecName "kube-api-access-gzlfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.611256 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-config-data" (OuterVolumeSpecName: "config-data") pod "18e5930e-5323-4957-8495-8ccec47fcec1" (UID: "18e5930e-5323-4957-8495-8ccec47fcec1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.639700 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18e5930e-5323-4957-8495-8ccec47fcec1" (UID: "18e5930e-5323-4957-8495-8ccec47fcec1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687185 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274c05f8-cb23-41d5-b911-5d13bac207a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687238 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rvqh\" (UniqueName: \"kubernetes.io/projected/274c05f8-cb23-41d5-b911-5d13bac207a0-kube-api-access-7rvqh\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687293 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274c05f8-cb23-41d5-b911-5d13bac207a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687434 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687448 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18e5930e-5323-4957-8495-8ccec47fcec1-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687458 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e5930e-5323-4957-8495-8ccec47fcec1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.687466 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzlfd\" (UniqueName: 
\"kubernetes.io/projected/18e5930e-5323-4957-8495-8ccec47fcec1-kube-api-access-gzlfd\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.691866 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274c05f8-cb23-41d5-b911-5d13bac207a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.714696 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274c05f8-cb23-41d5-b911-5d13bac207a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.719127 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rvqh\" (UniqueName: \"kubernetes.io/projected/274c05f8-cb23-41d5-b911-5d13bac207a0-kube-api-access-7rvqh\") pod \"nova-cell0-conductor-0\" (UID: \"274c05f8-cb23-41d5-b911-5d13bac207a0\") " pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.862657 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.929781 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.997813 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnbdt\" (UniqueName: \"kubernetes.io/projected/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-kube-api-access-hnbdt\") pod \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.997919 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-logs\") pod \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.997975 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-config-data\") pod \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.998049 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-combined-ca-bundle\") pod \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\" (UID: \"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6\") " Jan 30 23:15:45 crc kubenswrapper[4979]: I0130 23:15:45.998661 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-logs" (OuterVolumeSpecName: "logs") pod "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" (UID: "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.004520 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-kube-api-access-hnbdt" (OuterVolumeSpecName: "kube-api-access-hnbdt") pod "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" (UID: "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6"). InnerVolumeSpecName "kube-api-access-hnbdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.026122 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-config-data" (OuterVolumeSpecName: "config-data") pod "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" (UID: "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.032527 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" (UID: "0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.100163 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnbdt\" (UniqueName: \"kubernetes.io/projected/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-kube-api-access-hnbdt\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.100223 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.100240 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.100252 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.370538 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.454286 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"274c05f8-cb23-41d5-b911-5d13bac207a0","Type":"ContainerStarted","Data":"fdf757bd46ff2870fd14951aace91217de546f0fe3d3fdb86963926f8a1ad039"} Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.467007 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6","Type":"ContainerDied","Data":"fccc7af6d03a24335474c436786fa98ab8e787f6f9254489aeae84b9f43ab229"} Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.467096 4979 scope.go:117] "RemoveContainer" containerID="15299fab35d50937a89790172b2c509a1dfb9f92863a936ca37bdeb7cff0153a" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.467275 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.467438 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.499906 4979 scope.go:117] "RemoveContainer" containerID="abc237f618c829fbfeb4a9a4ef23c8e778e88ceedcbdce08bd22dead984035c4" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.546719 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.570101 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.590924 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: E0130 23:15:46.591334 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-log" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.591355 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-log" Jan 30 23:15:46 crc kubenswrapper[4979]: E0130 23:15:46.591383 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-api" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.591390 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-api" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.591566 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-log" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.591581 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" containerName="nova-api-api" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.606125 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.611467 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.612677 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.625318 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.633606 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.649162 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.651083 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.654065 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.664201 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.715473 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.715512 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-config-data\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.715548 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1269d92-1612-453c-8e80-29981ced4aca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.715569 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb4fm\" (UniqueName: \"kubernetes.io/projected/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-kube-api-access-tb4fm\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.715911 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1269d92-1612-453c-8e80-29981ced4aca-logs\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.716119 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1269d92-1612-453c-8e80-29981ced4aca-config-data\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.716171 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-logs\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.716239 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhvjr\" (UniqueName: \"kubernetes.io/projected/a1269d92-1612-453c-8e80-29981ced4aca-kube-api-access-xhvjr\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818455 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818500 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-config-data\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818540 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1269d92-1612-453c-8e80-29981ced4aca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818565 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb4fm\" (UniqueName: \"kubernetes.io/projected/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-kube-api-access-tb4fm\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818622 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1269d92-1612-453c-8e80-29981ced4aca-logs\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818667 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1269d92-1612-453c-8e80-29981ced4aca-config-data\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818689 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-logs\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.818716 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhvjr\" (UniqueName: \"kubernetes.io/projected/a1269d92-1612-453c-8e80-29981ced4aca-kube-api-access-xhvjr\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.819253 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1269d92-1612-453c-8e80-29981ced4aca-logs\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.819387 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-logs\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.823228 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-config-data\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.823803 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1269d92-1612-453c-8e80-29981ced4aca-config-data\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.824642 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1269d92-1612-453c-8e80-29981ced4aca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.835411 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb4fm\" (UniqueName: \"kubernetes.io/projected/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-kube-api-access-tb4fm\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.840501 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhvjr\" (UniqueName: \"kubernetes.io/projected/a1269d92-1612-453c-8e80-29981ced4aca-kube-api-access-xhvjr\") pod \"nova-metadata-0\" (UID: \"a1269d92-1612-453c-8e80-29981ced4aca\") " pod="openstack/nova-metadata-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.840982 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce01f4b-19ef-4c0b-ab4c-f76e96297fde-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde\") " pod="openstack/nova-api-0" Jan 30 23:15:46 crc kubenswrapper[4979]: I0130 23:15:46.933644 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.006810 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.082940 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6" path="/var/lib/kubelet/pods/0a89116a-a8b5-4bf6-8e13-ec81f6b7a8c6/volumes" Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.083698 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e5930e-5323-4957-8495-8ccec47fcec1" path="/var/lib/kubelet/pods/18e5930e-5323-4957-8495-8ccec47fcec1/volumes" Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.084348 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf4fb85a-b378-482c-92d5-34f7f4e99e23" path="/var/lib/kubelet/pods/bf4fb85a-b378-482c-92d5-34f7f4e99e23/volumes" Jan 30 23:15:47 crc kubenswrapper[4979]: W0130 23:15:47.424688 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1269d92_1612_453c_8e80_29981ced4aca.slice/crio-e589ab09299bd93bb140bd899b1dcf4b687b7bdb33d5b19644d5ec0b85e4ac80 WatchSource:0}: Error finding container e589ab09299bd93bb140bd899b1dcf4b687b7bdb33d5b19644d5ec0b85e4ac80: Status 404 returned error can't find the container with id e589ab09299bd93bb140bd899b1dcf4b687b7bdb33d5b19644d5ec0b85e4ac80 Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.427156 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.483228 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1269d92-1612-453c-8e80-29981ced4aca","Type":"ContainerStarted","Data":"e589ab09299bd93bb140bd899b1dcf4b687b7bdb33d5b19644d5ec0b85e4ac80"} Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.489434 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"274c05f8-cb23-41d5-b911-5d13bac207a0","Type":"ContainerStarted","Data":"8c131e53c89ddc510dcc01d0c60f598338f0f65e94b54eed23677c89ebaca21b"} Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.489532 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.509054 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 23:15:47 crc kubenswrapper[4979]: I0130 23:15:47.512792 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.512771776 podStartE2EDuration="2.512771776s" podCreationTimestamp="2026-01-30 23:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:47.505270613 +0000 UTC m=+5743.466517646" watchObservedRunningTime="2026-01-30 23:15:47.512771776 +0000 UTC m=+5743.474018819" Jan 30 23:15:47 crc kubenswrapper[4979]: W0130 23:15:47.527832 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce01f4b_19ef_4c0b_ab4c_f76e96297fde.slice/crio-b3fda4159793e901edddbe239e480189fd748f8dc04f6db8d4155b01bf37ce95 WatchSource:0}: Error finding container b3fda4159793e901edddbe239e480189fd748f8dc04f6db8d4155b01bf37ce95: Status 404 returned error can't find the container with id b3fda4159793e901edddbe239e480189fd748f8dc04f6db8d4155b01bf37ce95 Jan 30 23:15:47 crc 
kubenswrapper[4979]: E0130 23:15:47.967054 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 23:15:47 crc kubenswrapper[4979]: E0130 23:15:47.972814 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 23:15:47 crc kubenswrapper[4979]: E0130 23:15:47.977284 4979 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 23:15:47 crc kubenswrapper[4979]: E0130 23:15:47.977366 4979 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f0607a76-8412-4547-945c-f5672e9516f8" containerName="nova-scheduler-scheduler" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.344660 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.451136 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qld68\" (UniqueName: \"kubernetes.io/projected/4b128be7-1d02-4fdc-aa5d-356001e694ce-kube-api-access-qld68\") pod \"4b128be7-1d02-4fdc-aa5d-356001e694ce\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.451290 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-combined-ca-bundle\") pod \"4b128be7-1d02-4fdc-aa5d-356001e694ce\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.451414 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-config-data\") pod \"4b128be7-1d02-4fdc-aa5d-356001e694ce\" (UID: \"4b128be7-1d02-4fdc-aa5d-356001e694ce\") " Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.457724 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b128be7-1d02-4fdc-aa5d-356001e694ce-kube-api-access-qld68" (OuterVolumeSpecName: "kube-api-access-qld68") pod "4b128be7-1d02-4fdc-aa5d-356001e694ce" (UID: "4b128be7-1d02-4fdc-aa5d-356001e694ce"). InnerVolumeSpecName "kube-api-access-qld68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.477861 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b128be7-1d02-4fdc-aa5d-356001e694ce" (UID: "4b128be7-1d02-4fdc-aa5d-356001e694ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.485965 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-config-data" (OuterVolumeSpecName: "config-data") pod "4b128be7-1d02-4fdc-aa5d-356001e694ce" (UID: "4b128be7-1d02-4fdc-aa5d-356001e694ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.506120 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde","Type":"ContainerStarted","Data":"8698d5566fb9becbea0cb5eb481ef65f2ac7dbbe013ea0c1399eb23ab4418ddf"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.506171 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde","Type":"ContainerStarted","Data":"cbbc55d21c6c217dc75f5a688ab29119f5135fbb2c72463dfa4d37ebd13896bc"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.506186 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ce01f4b-19ef-4c0b-ab4c-f76e96297fde","Type":"ContainerStarted","Data":"b3fda4159793e901edddbe239e480189fd748f8dc04f6db8d4155b01bf37ce95"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.510497 4979 generic.go:334] "Generic (PLEG): container finished" podID="4b128be7-1d02-4fdc-aa5d-356001e694ce" containerID="d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d" exitCode=0 Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.510558 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.510586 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4b128be7-1d02-4fdc-aa5d-356001e694ce","Type":"ContainerDied","Data":"d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.510863 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4b128be7-1d02-4fdc-aa5d-356001e694ce","Type":"ContainerDied","Data":"dbd9dee23baab194c4b7ba7a0c9558a9771dc7905ed62cf49005905c307d1f4a"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.510915 4979 scope.go:117] "RemoveContainer" containerID="d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.514406 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1269d92-1612-453c-8e80-29981ced4aca","Type":"ContainerStarted","Data":"561e3f9f1d7870b9e791b62cb8972174bf7978d5d5f354a599b2e8b6ad4aee7d"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.514434 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1269d92-1612-453c-8e80-29981ced4aca","Type":"ContainerStarted","Data":"2d9b5135bee00c293bcad10b1cf9ecdabb42cdc93b5a33efade4ba2531397afb"} Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.536854 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.536826257 podStartE2EDuration="2.536826257s" podCreationTimestamp="2026-01-30 23:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:48.524155193 +0000 UTC m=+5744.485402226" watchObservedRunningTime="2026-01-30 23:15:48.536826257 +0000 UTC m=+5744.498073290" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.554095 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.554130 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b128be7-1d02-4fdc-aa5d-356001e694ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.554141 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qld68\" (UniqueName: \"kubernetes.io/projected/4b128be7-1d02-4fdc-aa5d-356001e694ce-kube-api-access-qld68\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.567970 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.56794478 podStartE2EDuration="2.56794478s" podCreationTimestamp="2026-01-30 23:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:48.546936581 +0000 UTC m=+5744.508183624" watchObservedRunningTime="2026-01-30 23:15:48.56794478 +0000 UTC m=+5744.529191813" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.584289 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:15:48 
crc kubenswrapper[4979]: I0130 23:15:48.595597 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.602380 4979 scope.go:117] "RemoveContainer" containerID="d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d" Jan 30 23:15:48 crc kubenswrapper[4979]: E0130 23:15:48.602955 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d\": container with ID starting with d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d not found: ID does not exist" containerID="d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.602989 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d"} err="failed to get container status \"d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d\": rpc error: code = NotFound desc = could not find container \"d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d\": container with ID starting with d8be1ba5908389e8453067f311d905b6b1068c69a1be4001d39192efd925b31d not found: ID does not exist" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.606888 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:15:48 crc kubenswrapper[4979]: E0130 23:15:48.607463 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b128be7-1d02-4fdc-aa5d-356001e694ce" containerName="nova-cell1-conductor-conductor" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.607485 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b128be7-1d02-4fdc-aa5d-356001e694ce" containerName="nova-cell1-conductor-conductor" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.607665 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b128be7-1d02-4fdc-aa5d-356001e694ce" containerName="nova-cell1-conductor-conductor" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.608318 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.615053 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.656682 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.656917 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.656970 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdk9d\" (UniqueName: \"kubernetes.io/projected/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-kube-api-access-bdk9d\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.670895 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.739438 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.759498 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.759936 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.759999 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdk9d\" (UniqueName: \"kubernetes.io/projected/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-kube-api-access-bdk9d\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.763156 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.769351 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" 
(UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:48 crc kubenswrapper[4979]: I0130 23:15:48.782067 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdk9d\" (UniqueName: \"kubernetes.io/projected/3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1-kube-api-access-bdk9d\") pod \"nova-cell1-conductor-0\" (UID: \"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1\") " pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:49 crc kubenswrapper[4979]: I0130 23:15:49.004113 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:49 crc kubenswrapper[4979]: I0130 23:15:49.081056 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b128be7-1d02-4fdc-aa5d-356001e694ce" path="/var/lib/kubelet/pods/4b128be7-1d02-4fdc-aa5d-356001e694ce/volumes" Jan 30 23:15:49 crc kubenswrapper[4979]: I0130 23:15:49.459958 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 23:15:49 crc kubenswrapper[4979]: W0130 23:15:49.465444 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab6e2f8_0934_41a0_b35e_0c6e0b5dacd1.slice/crio-450a50c236ff05258a5c8bb5a43afbc941f9e0e41f091ceb07b94b8ec71db35d WatchSource:0}: Error finding container 450a50c236ff05258a5c8bb5a43afbc941f9e0e41f091ceb07b94b8ec71db35d: Status 404 returned error can't find the container with id 450a50c236ff05258a5c8bb5a43afbc941f9e0e41f091ceb07b94b8ec71db35d Jan 30 23:15:49 crc kubenswrapper[4979]: I0130 23:15:49.533379 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1","Type":"ContainerStarted","Data":"450a50c236ff05258a5c8bb5a43afbc941f9e0e41f091ceb07b94b8ec71db35d"} Jan 30 23:15:50 crc kubenswrapper[4979]: I0130 23:15:50.548952 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1","Type":"ContainerStarted","Data":"6424cf9673576dec58b8e9cb3967c89696fd289997cfe1a9b161158c866792fb"} Jan 30 23:15:50 crc kubenswrapper[4979]: I0130 23:15:50.549474 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:50 crc kubenswrapper[4979]: I0130 23:15:50.590365 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.590332344 podStartE2EDuration="2.590332344s" podCreationTimestamp="2026-01-30 23:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:50.580334584 +0000 UTC m=+5746.541581617" watchObservedRunningTime="2026-01-30 23:15:50.590332344 +0000 UTC m=+5746.551579417" Jan 30 23:15:51 crc kubenswrapper[4979]: I0130 23:15:51.934828 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:15:51 crc kubenswrapper[4979]: I0130 23:15:51.936947 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.070611 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:15:52 crc kubenswrapper[4979]: E0130 23:15:52.070913 4979 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.109905 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.234449 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-combined-ca-bundle\") pod \"f0607a76-8412-4547-945c-f5672e9516f8\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.234542 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-config-data\") pod \"f0607a76-8412-4547-945c-f5672e9516f8\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.234664 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm2d4\" (UniqueName: \"kubernetes.io/projected/f0607a76-8412-4547-945c-f5672e9516f8-kube-api-access-bm2d4\") pod \"f0607a76-8412-4547-945c-f5672e9516f8\" (UID: \"f0607a76-8412-4547-945c-f5672e9516f8\") " Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.243998 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0607a76-8412-4547-945c-f5672e9516f8-kube-api-access-bm2d4" (OuterVolumeSpecName: "kube-api-access-bm2d4") pod "f0607a76-8412-4547-945c-f5672e9516f8" (UID: "f0607a76-8412-4547-945c-f5672e9516f8"). InnerVolumeSpecName "kube-api-access-bm2d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.278278 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0607a76-8412-4547-945c-f5672e9516f8" (UID: "f0607a76-8412-4547-945c-f5672e9516f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.290509 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-config-data" (OuterVolumeSpecName: "config-data") pod "f0607a76-8412-4547-945c-f5672e9516f8" (UID: "f0607a76-8412-4547-945c-f5672e9516f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.336528 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm2d4\" (UniqueName: \"kubernetes.io/projected/f0607a76-8412-4547-945c-f5672e9516f8-kube-api-access-bm2d4\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.336568 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.336580 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0607a76-8412-4547-945c-f5672e9516f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.575883 4979 generic.go:334] "Generic (PLEG): container finished" podID="f0607a76-8412-4547-945c-f5672e9516f8" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" exitCode=0 Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.575970 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0607a76-8412-4547-945c-f5672e9516f8","Type":"ContainerDied","Data":"6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b"} Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.576044 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0607a76-8412-4547-945c-f5672e9516f8","Type":"ContainerDied","Data":"97dcc53461a420d86c88ad2c9e5439b13ea4d32d8913b4f9be12a15f52d97f4b"} Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.576067 4979 scope.go:117] "RemoveContainer" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.576058 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.634812 4979 scope.go:117] "RemoveContainer" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" Jan 30 23:15:52 crc kubenswrapper[4979]: E0130 23:15:52.636180 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b\": container with ID starting with 6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b not found: ID does not exist" containerID="6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.636242 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b"} err="failed to get container status \"6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b\": rpc error: code = NotFound desc = could not find container \"6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b\": container with ID starting with 6fd3b3477a7b04f20cf804bf64e445e799baa53819be4e80f63755d6923ebb9b not found: ID does not exist" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.639791 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.659602 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.682790 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:15:52 crc kubenswrapper[4979]: E0130 23:15:52.684408 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0607a76-8412-4547-945c-f5672e9516f8" containerName="nova-scheduler-scheduler" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.684443 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0607a76-8412-4547-945c-f5672e9516f8" containerName="nova-scheduler-scheduler" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.685241 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0607a76-8412-4547-945c-f5672e9516f8" containerName="nova-scheduler-scheduler" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.693423 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.711395 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.719598 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.745061 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d75777-1cab-4bbc-ab03-361b03c488f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.745117 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d75777-1cab-4bbc-ab03-361b03c488f4-config-data\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.745159 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c74rx\" (UniqueName: \"kubernetes.io/projected/b6d75777-1cab-4bbc-ab03-361b03c488f4-kube-api-access-c74rx\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.846322 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d75777-1cab-4bbc-ab03-361b03c488f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.846380 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d75777-1cab-4bbc-ab03-361b03c488f4-config-data\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.846422 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c74rx\" (UniqueName: \"kubernetes.io/projected/b6d75777-1cab-4bbc-ab03-361b03c488f4-kube-api-access-c74rx\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.852044 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d75777-1cab-4bbc-ab03-361b03c488f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.853986 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d75777-1cab-4bbc-ab03-361b03c488f4-config-data\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:52 crc kubenswrapper[4979]: I0130 23:15:52.868466 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c74rx\" (UniqueName: 
\"kubernetes.io/projected/b6d75777-1cab-4bbc-ab03-361b03c488f4-kube-api-access-c74rx\") pod \"nova-scheduler-0\" (UID: \"b6d75777-1cab-4bbc-ab03-361b03c488f4\") " pod="openstack/nova-scheduler-0" Jan 30 23:15:53 crc kubenswrapper[4979]: I0130 23:15:53.032914 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 23:15:53 crc kubenswrapper[4979]: I0130 23:15:53.080004 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0607a76-8412-4547-945c-f5672e9516f8" path="/var/lib/kubelet/pods/f0607a76-8412-4547-945c-f5672e9516f8/volumes" Jan 30 23:15:53 crc kubenswrapper[4979]: I0130 23:15:53.531095 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 23:15:53 crc kubenswrapper[4979]: W0130 23:15:53.535688 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6d75777_1cab_4bbc_ab03_361b03c488f4.slice/crio-438310423a0549b121f76108c44d7c95f47c2f945ba10720d8885f1d7ac83e0c WatchSource:0}: Error finding container 438310423a0549b121f76108c44d7c95f47c2f945ba10720d8885f1d7ac83e0c: Status 404 returned error can't find the container with id 438310423a0549b121f76108c44d7c95f47c2f945ba10720d8885f1d7ac83e0c Jan 30 23:15:53 crc kubenswrapper[4979]: I0130 23:15:53.589069 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b6d75777-1cab-4bbc-ab03-361b03c488f4","Type":"ContainerStarted","Data":"438310423a0549b121f76108c44d7c95f47c2f945ba10720d8885f1d7ac83e0c"} Jan 30 23:15:53 crc kubenswrapper[4979]: I0130 23:15:53.739828 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:53 crc kubenswrapper[4979]: I0130 23:15:53.752318 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:54 crc kubenswrapper[4979]: I0130 23:15:54.049344 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 23:15:54 crc kubenswrapper[4979]: I0130 23:15:54.598947 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b6d75777-1cab-4bbc-ab03-361b03c488f4","Type":"ContainerStarted","Data":"928f87c3d45ff898d1d8dde5da3c9babebfd3c579208f6746eb5c698d0537949"} Jan 30 23:15:54 crc kubenswrapper[4979]: I0130 23:15:54.615453 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 23:15:54 crc kubenswrapper[4979]: I0130 23:15:54.624227 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.624204669 podStartE2EDuration="2.624204669s" podCreationTimestamp="2026-01-30 23:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:15:54.622819162 +0000 UTC m=+5750.584066195" watchObservedRunningTime="2026-01-30 23:15:54.624204669 +0000 UTC m=+5750.585451702" Jan 30 23:15:55 crc kubenswrapper[4979]: I0130 23:15:55.898522 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 23:15:56 crc kubenswrapper[4979]: I0130 23:15:56.936626 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 23:15:56 crc 
kubenswrapper[4979]: I0130 23:15:56.936922 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 23:15:57 crc kubenswrapper[4979]: I0130 23:15:57.008662 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 23:15:57 crc kubenswrapper[4979]: I0130 23:15:57.008709 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 23:15:58 crc kubenswrapper[4979]: I0130 23:15:58.025656 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a1269d92-1612-453c-8e80-29981ced4aca" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.87:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:15:58 crc kubenswrapper[4979]: I0130 23:15:58.025979 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a1269d92-1612-453c-8e80-29981ced4aca" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.87:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:15:58 crc kubenswrapper[4979]: I0130 23:15:58.033479 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 23:15:58 crc kubenswrapper[4979]: I0130 23:15:58.109372 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ce01f4b-19ef-4c0b-ab4c-f76e96297fde" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.88:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:15:58 crc kubenswrapper[4979]: I0130 23:15:58.110330 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ce01f4b-19ef-4c0b-ab4c-f76e96297fde" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.88:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.060644 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.062761 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.065572 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.097072 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.181946 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f2217ce-8d18-43fb-a08f-f39144f5aeed-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.182002 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.182060 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.182080 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.182113 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kb8g\" (UniqueName: \"kubernetes.io/projected/1f2217ce-8d18-43fb-a08f-f39144f5aeed-kube-api-access-5kb8g\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.182132 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.283877 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f2217ce-8d18-43fb-a08f-f39144f5aeed-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.283930 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.283958 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.283984 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.284019 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kb8g\" (UniqueName: \"kubernetes.io/projected/1f2217ce-8d18-43fb-a08f-f39144f5aeed-kube-api-access-5kb8g\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.284053 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.284657 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f2217ce-8d18-43fb-a08f-f39144f5aeed-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.292489 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.293211 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.303783 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.304652 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kb8g\" (UniqueName: \"kubernetes.io/projected/1f2217ce-8d18-43fb-a08f-f39144f5aeed-kube-api-access-5kb8g\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.345922 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 
crc kubenswrapper[4979]: I0130 23:15:59.397834 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 23:15:59 crc kubenswrapper[4979]: I0130 23:15:59.880975 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.140479 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.141589 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api-log" containerID="cri-o://436b6e4f92a01386ad3771816421209270d94a742621f8204dcf3dd212a924ec" gracePeriod=30 Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.141760 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api" containerID="cri-o://e76ef2119f87eb1d394842edac3278d0622498b6594da66e394b4ae5f6cc97f2" gracePeriod=30 Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.657148 4979 generic.go:334] "Generic (PLEG): container finished" podID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerID="436b6e4f92a01386ad3771816421209270d94a742621f8204dcf3dd212a924ec" exitCode=143 Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.657236 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5c27922-b152-465f-b0fe-117e336c7ae0","Type":"ContainerDied","Data":"436b6e4f92a01386ad3771816421209270d94a742621f8204dcf3dd212a924ec"} Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.659872 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f2217ce-8d18-43fb-a08f-f39144f5aeed","Type":"ContainerStarted","Data":"f34f6ceb54e852f4c063e802dbc7f5e8ca92abd2aab5ad9f8f928c8ae9b4ca33"} Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.659908 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f2217ce-8d18-43fb-a08f-f39144f5aeed","Type":"ContainerStarted","Data":"4426c069aa3a6213267e35e8b2382f441791df8002dd43fd176594900d983cdf"} Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.918797 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.920282 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.922127 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 30 23:16:00 crc kubenswrapper[4979]: I0130 23:16:00.939272 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021126 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3aa75164-0d7b-4b9a-a21d-2c5834956114-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021432 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-dev\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021456 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021472 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021489 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021514 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021534 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-run\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: 
I0130 23:16:01.021580 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvq7s\" (UniqueName: \"kubernetes.io/projected/3aa75164-0d7b-4b9a-a21d-2c5834956114-kube-api-access-wvq7s\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021607 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021633 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021652 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021671 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021704 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021724 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.021748 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-sys\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123453 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3aa75164-0d7b-4b9a-a21d-2c5834956114-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123500 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-dev\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123523 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123538 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123557 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123585 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123605 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-run\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123635 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123650 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvq7s\" (UniqueName: \"kubernetes.io/projected/3aa75164-0d7b-4b9a-a21d-2c5834956114-kube-api-access-wvq7s\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123681 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123711 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-iscsi\") pod 
\"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123741 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123759 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123793 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123823 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123849 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-sys\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.123949 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-sys\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.124879 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-dev\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.124987 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125044 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-run\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125201 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125209 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125242 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125238 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125305 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.125434 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3aa75164-0d7b-4b9a-a21d-2c5834956114-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.129817 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.129920 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3aa75164-0d7b-4b9a-a21d-2c5834956114-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.129953 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.135578 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc 
kubenswrapper[4979]: I0130 23:16:01.145622 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3aa75164-0d7b-4b9a-a21d-2c5834956114-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.146082 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvq7s\" (UniqueName: \"kubernetes.io/projected/3aa75164-0d7b-4b9a-a21d-2c5834956114-kube-api-access-wvq7s\") pod \"cinder-volume-volume1-0\" (UID: \"3aa75164-0d7b-4b9a-a21d-2c5834956114\") " pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.298952 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.468228 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.491763 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.491890 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.495814 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535243 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535280 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535318 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535339 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7h6w\" (UniqueName: \"kubernetes.io/projected/c3e02f71-2ffc-45bb-9344-28ff1640cffd-kube-api-access-m7h6w\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535360 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535423 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535446 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3e02f71-2ffc-45bb-9344-28ff1640cffd-ceph\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535469 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-dev\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535528 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535567 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535598 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-config-data\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535619 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-lib-modules\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535633 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-run\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535662 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535697 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-scripts\") pod 
\"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.535719 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-sys\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641184 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-scripts\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641235 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-sys\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641265 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641284 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641337 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-sys\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641400 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641408 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641461 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641485 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-iscsi\") pod \"cinder-backup-0\" (UID: 
\"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641522 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7h6w\" (UniqueName: \"kubernetes.io/projected/c3e02f71-2ffc-45bb-9344-28ff1640cffd-kube-api-access-m7h6w\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641543 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641719 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.641909 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642324 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3e02f71-2ffc-45bb-9344-28ff1640cffd-ceph\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642352 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-dev\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642376 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642399 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642422 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-config-data\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642444 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-lib-modules\") pod 
\"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642445 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-dev\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642459 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-run\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.642491 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.643272 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-lib-modules\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.643289 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-run\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.643355 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.643353 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c3e02f71-2ffc-45bb-9344-28ff1640cffd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.646183 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-scripts\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.646413 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.646528 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-config-data-custom\") pod \"cinder-backup-0\" (UID: 
\"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.647662 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3e02f71-2ffc-45bb-9344-28ff1640cffd-ceph\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.653084 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e02f71-2ffc-45bb-9344-28ff1640cffd-config-data\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.657766 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7h6w\" (UniqueName: \"kubernetes.io/projected/c3e02f71-2ffc-45bb-9344-28ff1640cffd-kube-api-access-m7h6w\") pod \"cinder-backup-0\" (UID: \"c3e02f71-2ffc-45bb-9344-28ff1640cffd\") " pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.679422 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f2217ce-8d18-43fb-a08f-f39144f5aeed","Type":"ContainerStarted","Data":"11a0850d9a45d42789690dcbd834d11974f8f84ffa75d002d1ee278eca1fedce"} Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.706897 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.706874064 podStartE2EDuration="2.706874064s" podCreationTimestamp="2026-01-30 23:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:16:01.699893925 +0000 UTC m=+5757.661140958" watchObservedRunningTime="2026-01-30 23:16:01.706874064 +0000 UTC m=+5757.668121097" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.831524 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.886725 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 23:16:01 crc kubenswrapper[4979]: I0130 23:16:01.889153 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 23:16:02 crc kubenswrapper[4979]: I0130 23:16:02.418773 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 30 23:16:02 crc kubenswrapper[4979]: W0130 23:16:02.421516 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3e02f71_2ffc_45bb_9344_28ff1640cffd.slice/crio-a590f8014dab172e9744c9545e4b71ce4839a52b56b8dc10d950233c94ca0d08 WatchSource:0}: Error finding container a590f8014dab172e9744c9545e4b71ce4839a52b56b8dc10d950233c94ca0d08: Status 404 returned error can't find the container with id a590f8014dab172e9744c9545e4b71ce4839a52b56b8dc10d950233c94ca0d08 Jan 30 23:16:02 crc kubenswrapper[4979]: I0130 23:16:02.690703 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"c3e02f71-2ffc-45bb-9344-28ff1640cffd","Type":"ContainerStarted","Data":"a590f8014dab172e9744c9545e4b71ce4839a52b56b8dc10d950233c94ca0d08"} Jan 30 23:16:02 crc kubenswrapper[4979]: I0130 23:16:02.691957 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3aa75164-0d7b-4b9a-a21d-2c5834956114","Type":"ContainerStarted","Data":"589a37400598162d558e5fa543ddb7712657c0944f4c5b20620cd378be4d18d8"} Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.033474 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.063390 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.702583 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3aa75164-0d7b-4b9a-a21d-2c5834956114","Type":"ContainerStarted","Data":"ca1c98fc9167e72ffec190cf39ce7fadd4aff01988d7f7a18950a4a96f293e92"} Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.703942 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3aa75164-0d7b-4b9a-a21d-2c5834956114","Type":"ContainerStarted","Data":"0b4aab30be3d2ea4fccc4119bc2f102c24d2f8e4b1cc72c0baa72ac8cabe8b71"} Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.705001 4979 generic.go:334] "Generic (PLEG): container finished" podID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerID="e76ef2119f87eb1d394842edac3278d0622498b6594da66e394b4ae5f6cc97f2" exitCode=0 Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.705063 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5c27922-b152-465f-b0fe-117e336c7ae0","Type":"ContainerDied","Data":"e76ef2119f87eb1d394842edac3278d0622498b6594da66e394b4ae5f6cc97f2"} Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.716706 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"c3e02f71-2ffc-45bb-9344-28ff1640cffd","Type":"ContainerStarted","Data":"f8ab7a6798449f8a857462d0197aa5edf748ba12b8453c7e3ce4db0501aa2adc"} Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 
23:16:03.716888 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"c3e02f71-2ffc-45bb-9344-28ff1640cffd","Type":"ContainerStarted","Data":"a3ba727f0fa2aa37641c80860539b01715d738587555b835e31222add036a961"} Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.734307 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.992716162 podStartE2EDuration="3.734284115s" podCreationTimestamp="2026-01-30 23:16:00 +0000 UTC" firstStartedPulling="2026-01-30 23:16:01.888869891 +0000 UTC m=+5757.850116924" lastFinishedPulling="2026-01-30 23:16:02.630437844 +0000 UTC m=+5758.591684877" observedRunningTime="2026-01-30 23:16:03.725775735 +0000 UTC m=+5759.687022768" watchObservedRunningTime="2026-01-30 23:16:03.734284115 +0000 UTC m=+5759.695531148" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.781477 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.785477 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.819058 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.125153244 podStartE2EDuration="2.819019498s" podCreationTimestamp="2026-01-30 23:16:01 +0000 UTC" firstStartedPulling="2026-01-30 23:16:02.424395086 +0000 UTC m=+5758.385642139" lastFinishedPulling="2026-01-30 23:16:03.11826136 +0000 UTC m=+5759.079508393" observedRunningTime="2026-01-30 23:16:03.762364765 +0000 UTC m=+5759.723611788" watchObservedRunningTime="2026-01-30 23:16:03.819019498 +0000 UTC m=+5759.780266531" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890667 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c27922-b152-465f-b0fe-117e336c7ae0-logs\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890708 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wnld\" (UniqueName: \"kubernetes.io/projected/e5c27922-b152-465f-b0fe-117e336c7ae0-kube-api-access-9wnld\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890746 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5c27922-b152-465f-b0fe-117e336c7ae0-etc-machine-id\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890879 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data-custom\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890927 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-scripts\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: 
\"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890964 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-combined-ca-bundle\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.890980 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data\") pod \"e5c27922-b152-465f-b0fe-117e336c7ae0\" (UID: \"e5c27922-b152-465f-b0fe-117e336c7ae0\") " Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.891174 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5c27922-b152-465f-b0fe-117e336c7ae0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.891895 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5c27922-b152-465f-b0fe-117e336c7ae0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.892190 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5c27922-b152-465f-b0fe-117e336c7ae0-logs" (OuterVolumeSpecName: "logs") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.920837 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c27922-b152-465f-b0fe-117e336c7ae0-kube-api-access-9wnld" (OuterVolumeSpecName: "kube-api-access-9wnld") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "kube-api-access-9wnld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.934104 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.963474 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-scripts" (OuterVolumeSpecName: "scripts") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.993887 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c27922-b152-465f-b0fe-117e336c7ae0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.993925 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wnld\" (UniqueName: \"kubernetes.io/projected/e5c27922-b152-465f-b0fe-117e336c7ae0-kube-api-access-9wnld\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.993940 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:03 crc kubenswrapper[4979]: I0130 23:16:03.993952 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.023200 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data" (OuterVolumeSpecName: "config-data") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.045628 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5c27922-b152-465f-b0fe-117e336c7ae0" (UID: "e5c27922-b152-465f-b0fe-117e336c7ae0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.095775 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.095810 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c27922-b152-465f-b0fe-117e336c7ae0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.399259 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.738505 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.743016 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5c27922-b152-465f-b0fe-117e336c7ae0","Type":"ContainerDied","Data":"5008657448c2eb55e18f087e3d982962674118c540ee915b230deaa21dab8bb1"} Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.743116 4979 scope.go:117] "RemoveContainer" containerID="e76ef2119f87eb1d394842edac3278d0622498b6594da66e394b4ae5f6cc97f2" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.788229 4979 scope.go:117] "RemoveContainer" containerID="436b6e4f92a01386ad3771816421209270d94a742621f8204dcf3dd212a924ec" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.793800 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.818800 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.846206 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:16:04 crc kubenswrapper[4979]: E0130 23:16:04.846967 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api-log" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.847076 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api-log" Jan 30 23:16:04 crc kubenswrapper[4979]: E0130 23:16:04.847175 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.847240 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.847535 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.847625 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" containerName="cinder-api-log" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.849086 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.851665 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.851710 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914289 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914344 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-config-data\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914399 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d24af8b-b86a-4604-82a5-e3d014dba7b5-logs\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914423 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d24af8b-b86a-4604-82a5-e3d014dba7b5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914446 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914464 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqtn\" (UniqueName: \"kubernetes.io/projected/5d24af8b-b86a-4604-82a5-e3d014dba7b5-kube-api-access-nbqtn\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:04 crc kubenswrapper[4979]: I0130 23:16:04.914481 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-scripts\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016118 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016226 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-config-data\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016296 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d24af8b-b86a-4604-82a5-e3d014dba7b5-logs\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016405 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d24af8b-b86a-4604-82a5-e3d014dba7b5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016435 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016459 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbqtn\" (UniqueName: \"kubernetes.io/projected/5d24af8b-b86a-4604-82a5-e3d014dba7b5-kube-api-access-nbqtn\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.016488 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-scripts\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.017794 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d24af8b-b86a-4604-82a5-e3d014dba7b5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.018665 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d24af8b-b86a-4604-82a5-e3d014dba7b5-logs\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.023154 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.024893 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-config-data\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.025300 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-scripts\") pod 
\"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.033743 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d24af8b-b86a-4604-82a5-e3d014dba7b5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.036375 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbqtn\" (UniqueName: \"kubernetes.io/projected/5d24af8b-b86a-4604-82a5-e3d014dba7b5-kube-api-access-nbqtn\") pod \"cinder-api-0\" (UID: \"5d24af8b-b86a-4604-82a5-e3d014dba7b5\") " pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.076749 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:16:05 crc kubenswrapper[4979]: E0130 23:16:05.077391 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.094710 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c27922-b152-465f-b0fe-117e336c7ae0" path="/var/lib/kubelet/pods/e5c27922-b152-465f-b0fe-117e336c7ae0/volumes" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.171712 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.648742 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 23:16:05 crc kubenswrapper[4979]: I0130 23:16:05.755214 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d24af8b-b86a-4604-82a5-e3d014dba7b5","Type":"ContainerStarted","Data":"28c785f20f07dcb595b163436195aad2163f19866dc7189b0e361956b2a31eb2"} Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.299807 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.766550 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d24af8b-b86a-4604-82a5-e3d014dba7b5","Type":"ContainerStarted","Data":"f65da1aabfac67d5bb9a1702c858a2b56192f3de77040842ca98eda9fbb25ac9"} Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.833198 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.938116 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.938450 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.940078 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 23:16:06 crc kubenswrapper[4979]: I0130 23:16:06.951933 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.015918 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.017333 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.017416 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.020458 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.782188 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d24af8b-b86a-4604-82a5-e3d014dba7b5","Type":"ContainerStarted","Data":"97b5cf56db6d6e2ed3b3f27849b1b2885b68150eb1cb49d92ee77f5261216a27"} Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.783139 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.783188 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.793578 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 23:16:07 crc kubenswrapper[4979]: I0130 23:16:07.806292 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.806266501 podStartE2EDuration="3.806266501s" podCreationTimestamp="2026-01-30 23:16:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:16:07.803809265 +0000 UTC m=+5763.765056328" watchObservedRunningTime="2026-01-30 23:16:07.806266501 +0000 UTC m=+5763.767513564" Jan 30 23:16:09 crc kubenswrapper[4979]: I0130 23:16:09.615372 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 23:16:09 crc kubenswrapper[4979]: I0130 23:16:09.675323 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:09 crc kubenswrapper[4979]: I0130 23:16:09.797984 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="cinder-scheduler" containerID="cri-o://f34f6ceb54e852f4c063e802dbc7f5e8ca92abd2aab5ad9f8f928c8ae9b4ca33" gracePeriod=30 Jan 30 23:16:09 crc kubenswrapper[4979]: I0130 23:16:09.798129 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="probe" containerID="cri-o://11a0850d9a45d42789690dcbd834d11974f8f84ffa75d002d1ee278eca1fedce" gracePeriod=30 Jan 30 23:16:10 crc kubenswrapper[4979]: I0130 23:16:10.813509 4979 generic.go:334] "Generic (PLEG): container finished" podID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerID="11a0850d9a45d42789690dcbd834d11974f8f84ffa75d002d1ee278eca1fedce" exitCode=0 Jan 30 23:16:10 crc kubenswrapper[4979]: I0130 23:16:10.813577 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f2217ce-8d18-43fb-a08f-f39144f5aeed","Type":"ContainerDied","Data":"11a0850d9a45d42789690dcbd834d11974f8f84ffa75d002d1ee278eca1fedce"} Jan 30 23:16:11 crc kubenswrapper[4979]: I0130 23:16:11.513258 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 30 23:16:11 crc kubenswrapper[4979]: I0130 23:16:11.821926 4979 generic.go:334] "Generic (PLEG): container finished" podID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerID="f34f6ceb54e852f4c063e802dbc7f5e8ca92abd2aab5ad9f8f928c8ae9b4ca33" exitCode=0 Jan 30 23:16:11 crc kubenswrapper[4979]: I0130 23:16:11.821966 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f2217ce-8d18-43fb-a08f-f39144f5aeed","Type":"ContainerDied","Data":"f34f6ceb54e852f4c063e802dbc7f5e8ca92abd2aab5ad9f8f928c8ae9b4ca33"} Jan 30 23:16:11 crc kubenswrapper[4979]: I0130 23:16:11.821991 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f2217ce-8d18-43fb-a08f-f39144f5aeed","Type":"ContainerDied","Data":"4426c069aa3a6213267e35e8b2382f441791df8002dd43fd176594900d983cdf"} Jan 30 23:16:11 crc kubenswrapper[4979]: I0130 23:16:11.822000 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4426c069aa3a6213267e35e8b2382f441791df8002dd43fd176594900d983cdf" Jan 30 23:16:11 crc kubenswrapper[4979]: I0130 23:16:11.877991 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.075942 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-scripts\") pod \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.076118 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f2217ce-8d18-43fb-a08f-f39144f5aeed-etc-machine-id\") pod \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.076156 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data-custom\") pod \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.076208 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-combined-ca-bundle\") pod \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.076262 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data\") pod \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.076296 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kb8g\" (UniqueName: \"kubernetes.io/projected/1f2217ce-8d18-43fb-a08f-f39144f5aeed-kube-api-access-5kb8g\") pod \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\" (UID: \"1f2217ce-8d18-43fb-a08f-f39144f5aeed\") " Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.077976 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.078181 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2217ce-8d18-43fb-a08f-f39144f5aeed-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1f2217ce-8d18-43fb-a08f-f39144f5aeed" (UID: "1f2217ce-8d18-43fb-a08f-f39144f5aeed"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.085379 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2217ce-8d18-43fb-a08f-f39144f5aeed-kube-api-access-5kb8g" (OuterVolumeSpecName: "kube-api-access-5kb8g") pod "1f2217ce-8d18-43fb-a08f-f39144f5aeed" (UID: "1f2217ce-8d18-43fb-a08f-f39144f5aeed"). InnerVolumeSpecName "kube-api-access-5kb8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.091789 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1f2217ce-8d18-43fb-a08f-f39144f5aeed" (UID: "1f2217ce-8d18-43fb-a08f-f39144f5aeed"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.100974 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-scripts" (OuterVolumeSpecName: "scripts") pod "1f2217ce-8d18-43fb-a08f-f39144f5aeed" (UID: "1f2217ce-8d18-43fb-a08f-f39144f5aeed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.149233 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f2217ce-8d18-43fb-a08f-f39144f5aeed" (UID: "1f2217ce-8d18-43fb-a08f-f39144f5aeed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.179738 4979 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f2217ce-8d18-43fb-a08f-f39144f5aeed-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.179892 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.179942 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.179988 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kb8g\" (UniqueName: \"kubernetes.io/projected/1f2217ce-8d18-43fb-a08f-f39144f5aeed-kube-api-access-5kb8g\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.180014 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.208496 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data" (OuterVolumeSpecName: "config-data") pod "1f2217ce-8d18-43fb-a08f-f39144f5aeed" (UID: "1f2217ce-8d18-43fb-a08f-f39144f5aeed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.282257 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2217ce-8d18-43fb-a08f-f39144f5aeed-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.833872 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.896288 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.908005 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.935591 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:12 crc kubenswrapper[4979]: E0130 23:16:12.936735 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="probe" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.936766 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="probe" Jan 30 23:16:12 crc kubenswrapper[4979]: E0130 23:16:12.936830 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="cinder-scheduler" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.936843 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="cinder-scheduler" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.942737 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="cinder-scheduler" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.942850 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" containerName="probe" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.946124 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.949289 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.954222 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.998004 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-config-data\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.998128 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.998172 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.998456 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-scripts\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.998616 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88f999da-53cb-4370-ab43-2a6623aa6d51-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:12 crc kubenswrapper[4979]: I0130 23:16:12.998659 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdz7f\" (UniqueName: \"kubernetes.io/projected/88f999da-53cb-4370-ab43-2a6623aa6d51-kube-api-access-jdz7f\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.079585 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2217ce-8d18-43fb-a08f-f39144f5aeed" path="/var/lib/kubelet/pods/1f2217ce-8d18-43fb-a08f-f39144f5aeed/volumes" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100010 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-scripts\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100095 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/88f999da-53cb-4370-ab43-2a6623aa6d51-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100122 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdz7f\" (UniqueName: \"kubernetes.io/projected/88f999da-53cb-4370-ab43-2a6623aa6d51-kube-api-access-jdz7f\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100162 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-config-data\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100190 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.100794 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88f999da-53cb-4370-ab43-2a6623aa6d51-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.105765 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.106434 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-config-data\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.115460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-scripts\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.115715 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f999da-53cb-4370-ab43-2a6623aa6d51-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.118876 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdz7f\" (UniqueName: \"kubernetes.io/projected/88f999da-53cb-4370-ab43-2a6623aa6d51-kube-api-access-jdz7f\") pod \"cinder-scheduler-0\" (UID: \"88f999da-53cb-4370-ab43-2a6623aa6d51\") " pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.272674 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.722528 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 23:16:13 crc kubenswrapper[4979]: W0130 23:16:13.724334 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88f999da_53cb_4370_ab43_2a6623aa6d51.slice/crio-3d5c352b5f8bdb166d0e5769cc6186196e811850c849f9c0a6fe2488611b7eec WatchSource:0}: Error finding container 3d5c352b5f8bdb166d0e5769cc6186196e811850c849f9c0a6fe2488611b7eec: Status 404 returned error can't find the container with id 3d5c352b5f8bdb166d0e5769cc6186196e811850c849f9c0a6fe2488611b7eec Jan 30 23:16:13 crc kubenswrapper[4979]: I0130 23:16:13.849428 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"88f999da-53cb-4370-ab43-2a6623aa6d51","Type":"ContainerStarted","Data":"3d5c352b5f8bdb166d0e5769cc6186196e811850c849f9c0a6fe2488611b7eec"} Jan 30 23:16:14 crc kubenswrapper[4979]: I0130 23:16:14.861132 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"88f999da-53cb-4370-ab43-2a6623aa6d51","Type":"ContainerStarted","Data":"6eb202b5be229d12aa3a7c54c1aa6e094afb7f2bff5cf1919ed376f6d4bb60e9"} Jan 30 23:16:15 crc kubenswrapper[4979]: I0130 23:16:15.876960 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"88f999da-53cb-4370-ab43-2a6623aa6d51","Type":"ContainerStarted","Data":"d9dd9f3b7244fbd0a67377eaa8636714f3e3b386eb714247a6af41121aae1a0d"} Jan 30 23:16:15 crc kubenswrapper[4979]: I0130 23:16:15.908942 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.908927277 podStartE2EDuration="3.908927277s" podCreationTimestamp="2026-01-30 23:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:16:15.906579993 +0000 UTC m=+5771.867827026" watchObservedRunningTime="2026-01-30 23:16:15.908927277 +0000 UTC m=+5771.870174310" Jan 30 23:16:16 crc kubenswrapper[4979]: I0130 23:16:16.069827 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:16:16 crc kubenswrapper[4979]: E0130 23:16:16.070219 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:16:16 crc kubenswrapper[4979]: I0130 23:16:16.983106 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 23:16:18 crc kubenswrapper[4979]: I0130 23:16:18.273013 
4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 23:16:23 crc kubenswrapper[4979]: I0130 23:16:23.560491 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 23:16:31 crc kubenswrapper[4979]: I0130 23:16:31.070590 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:16:31 crc kubenswrapper[4979]: E0130 23:16:31.071694 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:16:43 crc kubenswrapper[4979]: I0130 23:16:43.070771 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:16:43 crc kubenswrapper[4979]: E0130 23:16:43.072550 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:16:57 crc kubenswrapper[4979]: I0130 23:16:57.071802 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:16:57 crc kubenswrapper[4979]: E0130 23:16:57.072651 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:17:08 crc kubenswrapper[4979]: I0130 23:17:08.069894 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:17:08 crc kubenswrapper[4979]: E0130 23:17:08.071217 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:17:23 crc kubenswrapper[4979]: I0130 23:17:23.070785 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:17:23 crc kubenswrapper[4979]: E0130 23:17:23.072243 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:17:34 crc kubenswrapper[4979]: I0130 23:17:34.070878 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:17:34 crc kubenswrapper[4979]: E0130 23:17:34.072631 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:17:49 crc kubenswrapper[4979]: I0130 23:17:49.070376 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:17:49 crc kubenswrapper[4979]: E0130 23:17:49.071305 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.196256 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kssd2"] Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.198271 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.199854 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-k69pj" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.201742 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.210272 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kssd2"] Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.244636 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-54q6d"] Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.257485 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.259311 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-54q6d"] Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.354792 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-lib\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.354937 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2524172b-c864-4a7f-8c66-ffd219fa7be6-scripts\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355070 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-run\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355087 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-log\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355265 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-log-ovn\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355301 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-run-ovn\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355572 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdn2v\" (UniqueName: \"kubernetes.io/projected/2524172b-c864-4a7f-8c66-ffd219fa7be6-kube-api-access-tdn2v\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355599 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-run\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-etc-ovs\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355890 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-scripts\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.355942 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngl98\" (UniqueName: \"kubernetes.io/projected/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-kube-api-access-ngl98\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457357 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-run\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457402 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-log\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457422 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-log-ovn\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457454 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-run-ovn\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdn2v\" (UniqueName: \"kubernetes.io/projected/2524172b-c864-4a7f-8c66-ffd219fa7be6-kube-api-access-tdn2v\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457497 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-run\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457519 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-etc-ovs\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " 
pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457555 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-scripts\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457571 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngl98\" (UniqueName: \"kubernetes.io/projected/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-kube-api-access-ngl98\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457686 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-log\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457688 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-run\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457730 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-run-ovn\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457754 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-etc-ovs\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457776 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-lib\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457819 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-run\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457613 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-var-lib\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457873 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2524172b-c864-4a7f-8c66-ffd219fa7be6-var-log-ovn\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.457966 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2524172b-c864-4a7f-8c66-ffd219fa7be6-scripts\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.459727 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-scripts\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.460529 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2524172b-c864-4a7f-8c66-ffd219fa7be6-scripts\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.476085 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdn2v\" (UniqueName: \"kubernetes.io/projected/2524172b-c864-4a7f-8c66-ffd219fa7be6-kube-api-access-tdn2v\") pod \"ovn-controller-kssd2\" (UID: \"2524172b-c864-4a7f-8c66-ffd219fa7be6\") " pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.476766 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngl98\" (UniqueName: \"kubernetes.io/projected/5f8d6c92-62f8-427c-8208-cf3ba6d98af7-kube-api-access-ngl98\") pod \"ovn-controller-ovs-54q6d\" (UID: \"5f8d6c92-62f8-427c-8208-cf3ba6d98af7\") " pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.594614 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kssd2" Jan 30 23:18:02 crc kubenswrapper[4979]: I0130 23:18:02.623487 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.070224 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:18:03 crc kubenswrapper[4979]: E0130 23:18:03.070858 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.083635 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kssd2"] Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.262298 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kssd2" event={"ID":"2524172b-c864-4a7f-8c66-ffd219fa7be6","Type":"ContainerStarted","Data":"917cbd53d76ffde1b05d915beffa69779c72378b9b6bdbd5ae98c2e41c7bd228"} Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.500431 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-54q6d"] Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.790705 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-mp8qq"] Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.792641 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.809421 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-mp8qq"] Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.884181 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-56vn2"] Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.885689 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.887474 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.892248 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7glx\" (UniqueName: \"kubernetes.io/projected/cad393e9-51ee-4f44-976c-fb9c28487d67-kube-api-access-f7glx\") pod \"octavia-db-create-mp8qq\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.892321 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad393e9-51ee-4f44-976c-fb9c28487d67-operator-scripts\") pod \"octavia-db-create-mp8qq\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.895085 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-56vn2"] Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.994297 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggw2z\" (UniqueName: \"kubernetes.io/projected/927cfb5e-5147-4154-aad7-bd9d4aae47b2-kube-api-access-ggw2z\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.994413 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927cfb5e-5147-4154-aad7-bd9d4aae47b2-config\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.994669 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7glx\" (UniqueName: \"kubernetes.io/projected/cad393e9-51ee-4f44-976c-fb9c28487d67-kube-api-access-f7glx\") pod \"octavia-db-create-mp8qq\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.994746 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/927cfb5e-5147-4154-aad7-bd9d4aae47b2-ovn-rundir\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.994772 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/927cfb5e-5147-4154-aad7-bd9d4aae47b2-ovs-rundir\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.994928 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad393e9-51ee-4f44-976c-fb9c28487d67-operator-scripts\") pod \"octavia-db-create-mp8qq\" (UID: 
\"cad393e9-51ee-4f44-976c-fb9c28487d67\") " pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:03 crc kubenswrapper[4979]: I0130 23:18:03.996256 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad393e9-51ee-4f44-976c-fb9c28487d67-operator-scripts\") pod \"octavia-db-create-mp8qq\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.014012 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7glx\" (UniqueName: \"kubernetes.io/projected/cad393e9-51ee-4f44-976c-fb9c28487d67-kube-api-access-f7glx\") pod \"octavia-db-create-mp8qq\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.096346 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggw2z\" (UniqueName: \"kubernetes.io/projected/927cfb5e-5147-4154-aad7-bd9d4aae47b2-kube-api-access-ggw2z\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.096431 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927cfb5e-5147-4154-aad7-bd9d4aae47b2-config\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.096488 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/927cfb5e-5147-4154-aad7-bd9d4aae47b2-ovn-rundir\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.096512 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/927cfb5e-5147-4154-aad7-bd9d4aae47b2-ovs-rundir\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.096845 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/927cfb5e-5147-4154-aad7-bd9d4aae47b2-ovs-rundir\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.096967 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/927cfb5e-5147-4154-aad7-bd9d4aae47b2-ovn-rundir\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.097160 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927cfb5e-5147-4154-aad7-bd9d4aae47b2-config\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 
23:18:04.119044 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggw2z\" (UniqueName: \"kubernetes.io/projected/927cfb5e-5147-4154-aad7-bd9d4aae47b2-kube-api-access-ggw2z\") pod \"ovn-controller-metrics-56vn2\" (UID: \"927cfb5e-5147-4154-aad7-bd9d4aae47b2\") " pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.143256 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.206245 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-56vn2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.286972 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kssd2" event={"ID":"2524172b-c864-4a7f-8c66-ffd219fa7be6","Type":"ContainerStarted","Data":"31c5020918aba8d092a19e0b7ca5bbaebba679cb38d8dc662a860dbe0e160ff3"} Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.287367 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-kssd2" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.290547 4979 generic.go:334] "Generic (PLEG): container finished" podID="5f8d6c92-62f8-427c-8208-cf3ba6d98af7" containerID="b81c51bfa60ca7e89875d502bf09c62c926e61ce67e71e34d854b9859205ea7c" exitCode=0 Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.290589 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-54q6d" event={"ID":"5f8d6c92-62f8-427c-8208-cf3ba6d98af7","Type":"ContainerDied","Data":"b81c51bfa60ca7e89875d502bf09c62c926e61ce67e71e34d854b9859205ea7c"} Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.290612 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-54q6d" event={"ID":"5f8d6c92-62f8-427c-8208-cf3ba6d98af7","Type":"ContainerStarted","Data":"9f7defbe2e495cfe026a2c43f2ac9cad6d98d86a51e867fce732b4f6bd13016f"} Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.302368 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kssd2" podStartSLOduration=2.302347989 podStartE2EDuration="2.302347989s" podCreationTimestamp="2026-01-30 23:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:18:04.301398323 +0000 UTC m=+5880.262645356" watchObservedRunningTime="2026-01-30 23:18:04.302347989 +0000 UTC m=+5880.263595022" Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.626354 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-mp8qq"] Jan 30 23:18:04 crc kubenswrapper[4979]: I0130 23:18:04.707720 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-56vn2"] Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.271588 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-22c0-account-create-update-pwzqj"] Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.273594 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.276851 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.281178 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-22c0-account-create-update-pwzqj"] Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.302645 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-54q6d" event={"ID":"5f8d6c92-62f8-427c-8208-cf3ba6d98af7","Type":"ContainerStarted","Data":"d2639429e456fa5a2ff6cd80de0c22162641631c2df51f416d2d9994e6717acf"} Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.302688 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-54q6d" event={"ID":"5f8d6c92-62f8-427c-8208-cf3ba6d98af7","Type":"ContainerStarted","Data":"ef33eb93d9ee71c73bc2c045eb66e3e9eaddb1bdc39430e99ac2050a78298d92"} Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.302820 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.310637 4979 generic.go:334] "Generic (PLEG): container finished" podID="cad393e9-51ee-4f44-976c-fb9c28487d67" containerID="2901952f949f2b6e5bf0bdfc295d7dcb142b237e525207eca8287fadd9dc45a0" exitCode=0 Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.310693 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-mp8qq" event={"ID":"cad393e9-51ee-4f44-976c-fb9c28487d67","Type":"ContainerDied","Data":"2901952f949f2b6e5bf0bdfc295d7dcb142b237e525207eca8287fadd9dc45a0"} Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.310716 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-mp8qq" event={"ID":"cad393e9-51ee-4f44-976c-fb9c28487d67","Type":"ContainerStarted","Data":"a568a2475cb9c1d659819c3ad91a11115dff4d6329bfcfa402ac75e51b1e7009"} Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.319675 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-56vn2" event={"ID":"927cfb5e-5147-4154-aad7-bd9d4aae47b2","Type":"ContainerStarted","Data":"8073cff7c73e8d5c03f0f19c3c46e337e58c1935232c0b7a368e8c519e538ab6"} Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.319725 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-56vn2" event={"ID":"927cfb5e-5147-4154-aad7-bd9d4aae47b2","Type":"ContainerStarted","Data":"275824f36c95e1b9064e3c9aa9149b5eb632f11db01ec1bdc9f820ee29b1dcd6"} Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.330016 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-54q6d" podStartSLOduration=3.329995517 podStartE2EDuration="3.329995517s" podCreationTimestamp="2026-01-30 23:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:18:05.326978175 +0000 UTC m=+5881.288225198" watchObservedRunningTime="2026-01-30 23:18:05.329995517 +0000 UTC m=+5881.291242550" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.358974 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-56vn2" podStartSLOduration=2.358952921 podStartE2EDuration="2.358952921s" 
podCreationTimestamp="2026-01-30 23:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:18:05.354368316 +0000 UTC m=+5881.315615349" watchObservedRunningTime="2026-01-30 23:18:05.358952921 +0000 UTC m=+5881.320199954" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.430017 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpqgn\" (UniqueName: \"kubernetes.io/projected/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-kube-api-access-wpqgn\") pod \"octavia-22c0-account-create-update-pwzqj\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.430130 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-operator-scripts\") pod \"octavia-22c0-account-create-update-pwzqj\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.531955 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpqgn\" (UniqueName: \"kubernetes.io/projected/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-kube-api-access-wpqgn\") pod \"octavia-22c0-account-create-update-pwzqj\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.532006 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-operator-scripts\") pod \"octavia-22c0-account-create-update-pwzqj\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.532912 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-operator-scripts\") pod \"octavia-22c0-account-create-update-pwzqj\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.550510 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpqgn\" (UniqueName: \"kubernetes.io/projected/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-kube-api-access-wpqgn\") pod \"octavia-22c0-account-create-update-pwzqj\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:05 crc kubenswrapper[4979]: I0130 23:18:05.588565 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.065996 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-22c0-account-create-update-pwzqj"] Jan 30 23:18:06 crc kubenswrapper[4979]: W0130 23:18:06.075260 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fa0fc85_dd34_469d_a6b4_500d9e17e8cd.slice/crio-38e6156a99552726ad4ec8847bba425db94a2ea9fb26a1d5aab3e30300f1e209 WatchSource:0}: Error finding container 38e6156a99552726ad4ec8847bba425db94a2ea9fb26a1d5aab3e30300f1e209: Status 404 returned error can't find the container with id 38e6156a99552726ad4ec8847bba425db94a2ea9fb26a1d5aab3e30300f1e209 Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.330233 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-22c0-account-create-update-pwzqj" event={"ID":"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd","Type":"ContainerStarted","Data":"295a318396efe901097828f5812c2e83c8a8ea83df8ad7b1b542f03c853244c2"} Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.330508 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-22c0-account-create-update-pwzqj" event={"ID":"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd","Type":"ContainerStarted","Data":"38e6156a99552726ad4ec8847bba425db94a2ea9fb26a1d5aab3e30300f1e209"} Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.330648 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.354519 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-22c0-account-create-update-pwzqj" podStartSLOduration=1.3545013799999999 podStartE2EDuration="1.35450138s" podCreationTimestamp="2026-01-30 23:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:18:06.345319231 +0000 UTC m=+5882.306566264" watchObservedRunningTime="2026-01-30 23:18:06.35450138 +0000 UTC m=+5882.315748403" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.690874 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.759120 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad393e9-51ee-4f44-976c-fb9c28487d67-operator-scripts\") pod \"cad393e9-51ee-4f44-976c-fb9c28487d67\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.759247 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7glx\" (UniqueName: \"kubernetes.io/projected/cad393e9-51ee-4f44-976c-fb9c28487d67-kube-api-access-f7glx\") pod \"cad393e9-51ee-4f44-976c-fb9c28487d67\" (UID: \"cad393e9-51ee-4f44-976c-fb9c28487d67\") " Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.761180 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad393e9-51ee-4f44-976c-fb9c28487d67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cad393e9-51ee-4f44-976c-fb9c28487d67" (UID: "cad393e9-51ee-4f44-976c-fb9c28487d67"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.765797 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad393e9-51ee-4f44-976c-fb9c28487d67-kube-api-access-f7glx" (OuterVolumeSpecName: "kube-api-access-f7glx") pod "cad393e9-51ee-4f44-976c-fb9c28487d67" (UID: "cad393e9-51ee-4f44-976c-fb9c28487d67"). InnerVolumeSpecName "kube-api-access-f7glx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.861349 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad393e9-51ee-4f44-976c-fb9c28487d67-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:06 crc kubenswrapper[4979]: I0130 23:18:06.861379 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7glx\" (UniqueName: \"kubernetes.io/projected/cad393e9-51ee-4f44-976c-fb9c28487d67-kube-api-access-f7glx\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:07 crc kubenswrapper[4979]: I0130 23:18:07.341370 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-mp8qq" Jan 30 23:18:07 crc kubenswrapper[4979]: I0130 23:18:07.341356 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-mp8qq" event={"ID":"cad393e9-51ee-4f44-976c-fb9c28487d67","Type":"ContainerDied","Data":"a568a2475cb9c1d659819c3ad91a11115dff4d6329bfcfa402ac75e51b1e7009"} Jan 30 23:18:07 crc kubenswrapper[4979]: I0130 23:18:07.341518 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a568a2475cb9c1d659819c3ad91a11115dff4d6329bfcfa402ac75e51b1e7009" Jan 30 23:18:07 crc kubenswrapper[4979]: I0130 23:18:07.343943 4979 generic.go:334] "Generic (PLEG): container finished" podID="3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" containerID="295a318396efe901097828f5812c2e83c8a8ea83df8ad7b1b542f03c853244c2" exitCode=0 Jan 30 23:18:07 crc kubenswrapper[4979]: I0130 23:18:07.343989 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-22c0-account-create-update-pwzqj" event={"ID":"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd","Type":"ContainerDied","Data":"295a318396efe901097828f5812c2e83c8a8ea83df8ad7b1b542f03c853244c2"} Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.733635 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.800226 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-operator-scripts\") pod \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.800369 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpqgn\" (UniqueName: \"kubernetes.io/projected/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-kube-api-access-wpqgn\") pod \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\" (UID: \"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd\") " Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.800832 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" (UID: "3fa0fc85-dd34-469d-a6b4-500d9e17e8cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.811394 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-kube-api-access-wpqgn" (OuterVolumeSpecName: "kube-api-access-wpqgn") pod "3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" (UID: "3fa0fc85-dd34-469d-a6b4-500d9e17e8cd"). InnerVolumeSpecName "kube-api-access-wpqgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.902406 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:08 crc kubenswrapper[4979]: I0130 23:18:08.902446 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpqgn\" (UniqueName: \"kubernetes.io/projected/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd-kube-api-access-wpqgn\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.048003 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fcp6h"] Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.055245 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e97b-account-create-update-7kkdr"] Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.064123 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e97b-account-create-update-7kkdr"] Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.079639 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd1984c3-c561-48d8-8e99-a596088b25b7" path="/var/lib/kubelet/pods/cd1984c3-c561-48d8-8e99-a596088b25b7/volumes" Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.080267 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fcp6h"] Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.361081 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-22c0-account-create-update-pwzqj" event={"ID":"3fa0fc85-dd34-469d-a6b4-500d9e17e8cd","Type":"ContainerDied","Data":"38e6156a99552726ad4ec8847bba425db94a2ea9fb26a1d5aab3e30300f1e209"} Jan 30 23:18:09 crc 
kubenswrapper[4979]: I0130 23:18:09.361407 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38e6156a99552726ad4ec8847bba425db94a2ea9fb26a1d5aab3e30300f1e209" Jan 30 23:18:09 crc kubenswrapper[4979]: I0130 23:18:09.361297 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-22c0-account-create-update-pwzqj" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.087931 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="244815ff-89c6-49ac-91e1-4d8f44de6066" path="/var/lib/kubelet/pods/244815ff-89c6-49ac-91e1-4d8f44de6066/volumes" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.692454 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-vn66f"] Jan 30 23:18:11 crc kubenswrapper[4979]: E0130 23:18:11.692957 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad393e9-51ee-4f44-976c-fb9c28487d67" containerName="mariadb-database-create" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.692983 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad393e9-51ee-4f44-976c-fb9c28487d67" containerName="mariadb-database-create" Jan 30 23:18:11 crc kubenswrapper[4979]: E0130 23:18:11.693026 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" containerName="mariadb-account-create-update" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.693058 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" containerName="mariadb-account-create-update" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.693301 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" containerName="mariadb-account-create-update" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.693329 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cad393e9-51ee-4f44-976c-fb9c28487d67" containerName="mariadb-database-create" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.694239 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.703981 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-vn66f"] Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.759808 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gw6h\" (UniqueName: \"kubernetes.io/projected/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-kube-api-access-2gw6h\") pod \"octavia-persistence-db-create-vn66f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.759935 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-operator-scripts\") pod \"octavia-persistence-db-create-vn66f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.878788 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gw6h\" (UniqueName: \"kubernetes.io/projected/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-kube-api-access-2gw6h\") pod \"octavia-persistence-db-create-vn66f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.879115 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-operator-scripts\") pod \"octavia-persistence-db-create-vn66f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.881140 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-operator-scripts\") pod \"octavia-persistence-db-create-vn66f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:11 crc kubenswrapper[4979]: I0130 23:18:11.899050 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gw6h\" (UniqueName: \"kubernetes.io/projected/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-kube-api-access-2gw6h\") pod \"octavia-persistence-db-create-vn66f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.029022 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.169934 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h7k4g"] Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.174810 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.185856 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7k4g"] Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.288534 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4d5b\" (UniqueName: \"kubernetes.io/projected/978132d6-bbdd-4d38-b69d-8713bafb726b-kube-api-access-c4d5b\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.288601 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-catalog-content\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.288918 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-utilities\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.391155 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4d5b\" (UniqueName: \"kubernetes.io/projected/978132d6-bbdd-4d38-b69d-8713bafb726b-kube-api-access-c4d5b\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.391205 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-catalog-content\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.391253 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-utilities\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.391836 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-catalog-content\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.391891 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-utilities\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.407636 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c4d5b\" (UniqueName: \"kubernetes.io/projected/978132d6-bbdd-4d38-b69d-8713bafb726b-kube-api-access-c4d5b\") pod \"community-operators-h7k4g\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.497497 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.540233 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-vn66f"] Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.839408 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-ff98-account-create-update-szcww"] Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.842057 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.844716 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.852364 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ff98-account-create-update-szcww"] Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.901281 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwldh\" (UniqueName: \"kubernetes.io/projected/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-kube-api-access-zwldh\") pod \"octavia-ff98-account-create-update-szcww\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:12 crc kubenswrapper[4979]: I0130 23:18:12.901403 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-operator-scripts\") pod \"octavia-ff98-account-create-update-szcww\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.003302 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwldh\" (UniqueName: \"kubernetes.io/projected/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-kube-api-access-zwldh\") pod \"octavia-ff98-account-create-update-szcww\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.003420 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-operator-scripts\") pod \"octavia-ff98-account-create-update-szcww\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.004275 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-operator-scripts\") pod \"octavia-ff98-account-create-update-szcww\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:13 crc 
kubenswrapper[4979]: I0130 23:18:13.013685 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7k4g"] Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.021935 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwldh\" (UniqueName: \"kubernetes.io/projected/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-kube-api-access-zwldh\") pod \"octavia-ff98-account-create-update-szcww\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.170775 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.410331 4979 generic.go:334] "Generic (PLEG): container finished" podID="5d8f6093-1ce3-4cb4-829a-71a3aaded46f" containerID="ab9d6fd9b6c78c1609831430497301a395dbc97dc2a1cc5b8ce36db173127e64" exitCode=0 Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.410417 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-vn66f" event={"ID":"5d8f6093-1ce3-4cb4-829a-71a3aaded46f","Type":"ContainerDied","Data":"ab9d6fd9b6c78c1609831430497301a395dbc97dc2a1cc5b8ce36db173127e64"} Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.410637 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-vn66f" event={"ID":"5d8f6093-1ce3-4cb4-829a-71a3aaded46f","Type":"ContainerStarted","Data":"3fb03c5a5aa72ebda057704e5eb39535a99c2e8e757bfc8a37d44531abbcfa6f"} Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.413646 4979 generic.go:334] "Generic (PLEG): container finished" podID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerID="6d616084358c968a0cef1f0dabd45c508a2b560e879d97f236802954ea33a0fb" exitCode=0 Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.413875 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerDied","Data":"6d616084358c968a0cef1f0dabd45c508a2b560e879d97f236802954ea33a0fb"} Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.413921 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerStarted","Data":"4aed268deef5dedad8a08efebfaa72dcd92b76e589a8cdf33b18ee34f3580454"} Jan 30 23:18:13 crc kubenswrapper[4979]: I0130 23:18:13.677906 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ff98-account-create-update-szcww"] Jan 30 23:18:13 crc kubenswrapper[4979]: W0130 23:18:13.685074 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9549a4c7_2fb8_4f18_a7d3_902949e90d8c.slice/crio-9893a3a140b643f96a7284f2a06cece6cd577d35ef42ed5f79cd3dec79d9042b WatchSource:0}: Error finding container 9893a3a140b643f96a7284f2a06cece6cd577d35ef42ed5f79cd3dec79d9042b: Status 404 returned error can't find the container with id 9893a3a140b643f96a7284f2a06cece6cd577d35ef42ed5f79cd3dec79d9042b Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.071007 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:18:14 crc kubenswrapper[4979]: E0130 23:18:14.071571 4979 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.423973 4979 generic.go:334] "Generic (PLEG): container finished" podID="9549a4c7-2fb8-4f18-a7d3-902949e90d8c" containerID="4585a42ea864cc4af87b4f754b0c7b9540e84f1af59fb62e004a04f42ca82ee5" exitCode=0 Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.424198 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ff98-account-create-update-szcww" event={"ID":"9549a4c7-2fb8-4f18-a7d3-902949e90d8c","Type":"ContainerDied","Data":"4585a42ea864cc4af87b4f754b0c7b9540e84f1af59fb62e004a04f42ca82ee5"} Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.424242 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ff98-account-create-update-szcww" event={"ID":"9549a4c7-2fb8-4f18-a7d3-902949e90d8c","Type":"ContainerStarted","Data":"9893a3a140b643f96a7284f2a06cece6cd577d35ef42ed5f79cd3dec79d9042b"} Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.428213 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerStarted","Data":"3b00eb9ee91e5cceca42bd097eed4bd052eb58360e238fab642dfd713f43b667"} Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.832249 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.944398 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-operator-scripts\") pod \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.944618 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gw6h\" (UniqueName: \"kubernetes.io/projected/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-kube-api-access-2gw6h\") pod \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\" (UID: \"5d8f6093-1ce3-4cb4-829a-71a3aaded46f\") " Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.945303 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d8f6093-1ce3-4cb4-829a-71a3aaded46f" (UID: "5d8f6093-1ce3-4cb4-829a-71a3aaded46f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:18:14 crc kubenswrapper[4979]: I0130 23:18:14.957571 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-kube-api-access-2gw6h" (OuterVolumeSpecName: "kube-api-access-2gw6h") pod "5d8f6093-1ce3-4cb4-829a-71a3aaded46f" (UID: "5d8f6093-1ce3-4cb4-829a-71a3aaded46f"). InnerVolumeSpecName "kube-api-access-2gw6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.031544 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-9lbrp"] Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.044417 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-9lbrp"] Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.047299 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gw6h\" (UniqueName: \"kubernetes.io/projected/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-kube-api-access-2gw6h\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.047351 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d8f6093-1ce3-4cb4-829a-71a3aaded46f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.090761 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e90fa06-119c-454e-9f4e-da0b5bff99bb" path="/var/lib/kubelet/pods/7e90fa06-119c-454e-9f4e-da0b5bff99bb/volumes" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.457074 4979 generic.go:334] "Generic (PLEG): container finished" podID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerID="3b00eb9ee91e5cceca42bd097eed4bd052eb58360e238fab642dfd713f43b667" exitCode=0 Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.457239 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerDied","Data":"3b00eb9ee91e5cceca42bd097eed4bd052eb58360e238fab642dfd713f43b667"} Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.462172 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-vn66f" event={"ID":"5d8f6093-1ce3-4cb4-829a-71a3aaded46f","Type":"ContainerDied","Data":"3fb03c5a5aa72ebda057704e5eb39535a99c2e8e757bfc8a37d44531abbcfa6f"} Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.462242 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb03c5a5aa72ebda057704e5eb39535a99c2e8e757bfc8a37d44531abbcfa6f" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.462195 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-vn66f" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.900704 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.964819 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwldh\" (UniqueName: \"kubernetes.io/projected/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-kube-api-access-zwldh\") pod \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.965369 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-operator-scripts\") pod \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\" (UID: \"9549a4c7-2fb8-4f18-a7d3-902949e90d8c\") " Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.966480 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9549a4c7-2fb8-4f18-a7d3-902949e90d8c" (UID: "9549a4c7-2fb8-4f18-a7d3-902949e90d8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:18:15 crc kubenswrapper[4979]: I0130 23:18:15.970089 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-kube-api-access-zwldh" (OuterVolumeSpecName: "kube-api-access-zwldh") pod "9549a4c7-2fb8-4f18-a7d3-902949e90d8c" (UID: "9549a4c7-2fb8-4f18-a7d3-902949e90d8c"). InnerVolumeSpecName "kube-api-access-zwldh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.067665 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.067696 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwldh\" (UniqueName: \"kubernetes.io/projected/9549a4c7-2fb8-4f18-a7d3-902949e90d8c-kube-api-access-zwldh\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.470226 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ff98-account-create-update-szcww" event={"ID":"9549a4c7-2fb8-4f18-a7d3-902949e90d8c","Type":"ContainerDied","Data":"9893a3a140b643f96a7284f2a06cece6cd577d35ef42ed5f79cd3dec79d9042b"} Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.470261 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ff98-account-create-update-szcww" Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.470268 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9893a3a140b643f96a7284f2a06cece6cd577d35ef42ed5f79cd3dec79d9042b" Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.472131 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerStarted","Data":"5e05771ad840b11b9c7adb62481a88af69b5d4841c0c944b2ad551c3d6113e34"} Jan 30 23:18:16 crc kubenswrapper[4979]: I0130 23:18:16.491508 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h7k4g" podStartSLOduration=2.020111714 podStartE2EDuration="4.491489543s" podCreationTimestamp="2026-01-30 23:18:12 +0000 UTC" firstStartedPulling="2026-01-30 23:18:13.415738484 +0000 UTC m=+5889.376985517" lastFinishedPulling="2026-01-30 23:18:15.887116293 +0000 UTC m=+5891.848363346" observedRunningTime="2026-01-30 23:18:16.488743779 +0000 UTC m=+5892.449990812" watchObservedRunningTime="2026-01-30 23:18:16.491489543 +0000 UTC m=+5892.452736576" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.448107 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-657b9576cf-gswsb"] Jan 30 23:18:18 crc kubenswrapper[4979]: E0130 23:18:18.448870 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8f6093-1ce3-4cb4-829a-71a3aaded46f" containerName="mariadb-database-create" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.448888 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8f6093-1ce3-4cb4-829a-71a3aaded46f" containerName="mariadb-database-create" Jan 30 23:18:18 crc kubenswrapper[4979]: E0130 23:18:18.448920 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9549a4c7-2fb8-4f18-a7d3-902949e90d8c" containerName="mariadb-account-create-update" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.448927 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9549a4c7-2fb8-4f18-a7d3-902949e90d8c" containerName="mariadb-account-create-update" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.449150 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9549a4c7-2fb8-4f18-a7d3-902949e90d8c" containerName="mariadb-account-create-update" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.449165 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8f6093-1ce3-4cb4-829a-71a3aaded46f" containerName="mariadb-database-create" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.450612 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.453090 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.453477 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-9f6bb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.453850 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.459576 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-657b9576cf-gswsb"] Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.520085 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/bc255f37-2650-4c57-b4d0-4709be5a5d25-octavia-run\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.520421 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-config-data\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.520447 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bc255f37-2650-4c57-b4d0-4709be5a5d25-config-data-merged\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.520517 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-scripts\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.520552 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-combined-ca-bundle\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.621704 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-scripts\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.621760 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-combined-ca-bundle\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc 
kubenswrapper[4979]: I0130 23:18:18.621849 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/bc255f37-2650-4c57-b4d0-4709be5a5d25-octavia-run\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.621878 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-config-data\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.621896 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bc255f37-2650-4c57-b4d0-4709be5a5d25-config-data-merged\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.622371 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bc255f37-2650-4c57-b4d0-4709be5a5d25-config-data-merged\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.622647 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/bc255f37-2650-4c57-b4d0-4709be5a5d25-octavia-run\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.629958 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-combined-ca-bundle\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.629968 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-scripts\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.630634 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc255f37-2650-4c57-b4d0-4709be5a5d25-config-data\") pod \"octavia-api-657b9576cf-gswsb\" (UID: \"bc255f37-2650-4c57-b4d0-4709be5a5d25\") " pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:18 crc kubenswrapper[4979]: I0130 23:18:18.775309 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:19 crc kubenswrapper[4979]: I0130 23:18:19.373241 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-657b9576cf-gswsb"] Jan 30 23:18:19 crc kubenswrapper[4979]: I0130 23:18:19.496527 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-657b9576cf-gswsb" event={"ID":"bc255f37-2650-4c57-b4d0-4709be5a5d25","Type":"ContainerStarted","Data":"b14bfe7a25a727f299de9143fecc9a61989a8fc979a15a572320f6b95cdd47d8"} Jan 30 23:18:22 crc kubenswrapper[4979]: I0130 23:18:22.498182 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:22 crc kubenswrapper[4979]: I0130 23:18:22.498574 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:22 crc kubenswrapper[4979]: I0130 23:18:22.552358 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:22 crc kubenswrapper[4979]: I0130 23:18:22.600451 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:22 crc kubenswrapper[4979]: I0130 23:18:22.793946 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7k4g"] Jan 30 23:18:24 crc kubenswrapper[4979]: I0130 23:18:24.547597 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h7k4g" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="registry-server" containerID="cri-o://5e05771ad840b11b9c7adb62481a88af69b5d4841c0c944b2ad551c3d6113e34" gracePeriod=2 Jan 30 23:18:25 crc kubenswrapper[4979]: I0130 23:18:25.571290 4979 generic.go:334] "Generic (PLEG): container finished" podID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerID="5e05771ad840b11b9c7adb62481a88af69b5d4841c0c944b2ad551c3d6113e34" exitCode=0 Jan 30 23:18:25 crc kubenswrapper[4979]: I0130 23:18:25.571351 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerDied","Data":"5e05771ad840b11b9c7adb62481a88af69b5d4841c0c944b2ad551c3d6113e34"} Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.318025 4979 scope.go:117] "RemoveContainer" containerID="958f1b82a7938a7c0d27709d282569c0aab4b64a07e68b1bb769e01caed93449" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.666372 4979 scope.go:117] "RemoveContainer" containerID="ce550ab1c6e408aea10d06173b7920d5c55fe0078943da671c3598da2665ca61" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.774006 4979 scope.go:117] "RemoveContainer" containerID="14e6d9a35e66da497f5366e01530325f2e7b1996be432a046623a1284c656b4d" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.829466 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.892112 4979 scope.go:117] "RemoveContainer" containerID="519cd3d78305849e3e5a18a0d4ee7c2c5e0a82f36ae21f2f29ad0865227dc983" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.910404 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-utilities\") pod \"978132d6-bbdd-4d38-b69d-8713bafb726b\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.910997 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4d5b\" (UniqueName: \"kubernetes.io/projected/978132d6-bbdd-4d38-b69d-8713bafb726b-kube-api-access-c4d5b\") pod \"978132d6-bbdd-4d38-b69d-8713bafb726b\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.911087 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-catalog-content\") pod \"978132d6-bbdd-4d38-b69d-8713bafb726b\" (UID: \"978132d6-bbdd-4d38-b69d-8713bafb726b\") " Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.911755 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-utilities" (OuterVolumeSpecName: "utilities") pod "978132d6-bbdd-4d38-b69d-8713bafb726b" (UID: "978132d6-bbdd-4d38-b69d-8713bafb726b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.931390 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/978132d6-bbdd-4d38-b69d-8713bafb726b-kube-api-access-c4d5b" (OuterVolumeSpecName: "kube-api-access-c4d5b") pod "978132d6-bbdd-4d38-b69d-8713bafb726b" (UID: "978132d6-bbdd-4d38-b69d-8713bafb726b"). InnerVolumeSpecName "kube-api-access-c4d5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.948654 4979 scope.go:117] "RemoveContainer" containerID="949025542d878f0aec57178ae4767449919585cb47ec404495f570b3fe0d8899" Jan 30 23:18:27 crc kubenswrapper[4979]: I0130 23:18:27.972904 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "978132d6-bbdd-4d38-b69d-8713bafb726b" (UID: "978132d6-bbdd-4d38-b69d-8713bafb726b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.013915 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4d5b\" (UniqueName: \"kubernetes.io/projected/978132d6-bbdd-4d38-b69d-8713bafb726b-kube-api-access-c4d5b\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.013973 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.013994 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/978132d6-bbdd-4d38-b69d-8713bafb726b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.609231 4979 generic.go:334] "Generic (PLEG): container finished" podID="bc255f37-2650-4c57-b4d0-4709be5a5d25" containerID="9d0806b314d8bcb1261b5c6e83a0b50664719486330b42e0947e70be649acf43" exitCode=0 Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.609294 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-657b9576cf-gswsb" event={"ID":"bc255f37-2650-4c57-b4d0-4709be5a5d25","Type":"ContainerDied","Data":"9d0806b314d8bcb1261b5c6e83a0b50664719486330b42e0947e70be649acf43"} Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.615990 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7k4g" event={"ID":"978132d6-bbdd-4d38-b69d-8713bafb726b","Type":"ContainerDied","Data":"4aed268deef5dedad8a08efebfaa72dcd92b76e589a8cdf33b18ee34f3580454"} Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.616072 4979 scope.go:117] "RemoveContainer" containerID="5e05771ad840b11b9c7adb62481a88af69b5d4841c0c944b2ad551c3d6113e34" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.616151 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7k4g" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.637635 4979 scope.go:117] "RemoveContainer" containerID="3b00eb9ee91e5cceca42bd097eed4bd052eb58360e238fab642dfd713f43b667" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.671707 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7k4g"] Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.681818 4979 scope.go:117] "RemoveContainer" containerID="6d616084358c968a0cef1f0dabd45c508a2b560e879d97f236802954ea33a0fb" Jan 30 23:18:28 crc kubenswrapper[4979]: I0130 23:18:28.683105 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h7k4g"] Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.062868 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-n2mf2"] Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.078858 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:18:29 crc kubenswrapper[4979]: E0130 23:18:29.081142 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.103526 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" path="/var/lib/kubelet/pods/978132d6-bbdd-4d38-b69d-8713bafb726b/volumes" Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.104512 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-n2mf2"] Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.628602 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-657b9576cf-gswsb" event={"ID":"bc255f37-2650-4c57-b4d0-4709be5a5d25","Type":"ContainerStarted","Data":"da34fa743d281aaa91d004c0354f1bd32f61a83ed05262f3a63a8ccacc65f81f"} Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.629011 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-657b9576cf-gswsb" event={"ID":"bc255f37-2650-4c57-b4d0-4709be5a5d25","Type":"ContainerStarted","Data":"2129a5ba5ce7897b561d4a67a7dc62dda75bf684cb386fbbd14c2252d29885db"} Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.629057 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.629071 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:29 crc kubenswrapper[4979]: I0130 23:18:29.649428 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-657b9576cf-gswsb" podStartSLOduration=3.315548708 podStartE2EDuration="11.649405561s" podCreationTimestamp="2026-01-30 23:18:18 +0000 UTC" firstStartedPulling="2026-01-30 23:18:19.381852044 +0000 UTC m=+5895.343099087" lastFinishedPulling="2026-01-30 23:18:27.715708877 +0000 UTC m=+5903.676955940" observedRunningTime="2026-01-30 23:18:29.64531915 +0000 UTC m=+5905.606566183" 
watchObservedRunningTime="2026-01-30 23:18:29.649405561 +0000 UTC m=+5905.610652594" Jan 30 23:18:31 crc kubenswrapper[4979]: I0130 23:18:31.093789 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d5aa2c0-69c0-486f-8bf7-0f7539935f2e" path="/var/lib/kubelet/pods/2d5aa2c0-69c0-486f-8bf7-0f7539935f2e/volumes" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.632898 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kssd2" podUID="2524172b-c864-4a7f-8c66-ffd219fa7be6" containerName="ovn-controller" probeResult="failure" output=< Jan 30 23:18:37 crc kubenswrapper[4979]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 23:18:37 crc kubenswrapper[4979]: > Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.666054 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.672532 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-54q6d" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.711707 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.812754 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kssd2-config-khdnr"] Jan 30 23:18:37 crc kubenswrapper[4979]: E0130 23:18:37.816881 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="extract-content" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.816920 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="extract-content" Jan 30 23:18:37 crc kubenswrapper[4979]: E0130 23:18:37.817003 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="registry-server" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.817017 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="registry-server" Jan 30 23:18:37 crc kubenswrapper[4979]: E0130 23:18:37.817077 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="extract-utilities" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.817088 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="extract-utilities" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.817803 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="978132d6-bbdd-4d38-b69d-8713bafb726b" containerName="registry-server" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.822824 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.828020 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.839310 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kssd2-config-khdnr"] Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.950193 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-log-ovn\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.951915 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.951971 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run-ovn\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.952027 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5whvz\" (UniqueName: \"kubernetes.io/projected/9a7e245e-175c-4fb3-b0de-b3d99a33548c-kube-api-access-5whvz\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.952106 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-scripts\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:37 crc kubenswrapper[4979]: I0130 23:18:37.952134 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-additional-scripts\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.053234 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-scripts\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.053317 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-additional-scripts\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.053431 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-log-ovn\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.053478 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.053522 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run-ovn\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.053586 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5whvz\" (UniqueName: \"kubernetes.io/projected/9a7e245e-175c-4fb3-b0de-b3d99a33548c-kube-api-access-5whvz\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.054630 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-additional-scripts\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.054863 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-log-ovn\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.054904 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.054945 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run-ovn\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.055726 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-scripts\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.075431 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5whvz\" (UniqueName: \"kubernetes.io/projected/9a7e245e-175c-4fb3-b0de-b3d99a33548c-kube-api-access-5whvz\") pod \"ovn-controller-kssd2-config-khdnr\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.152419 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:38 crc kubenswrapper[4979]: I0130 23:18:38.718060 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kssd2-config-khdnr"] Jan 30 23:18:39 crc kubenswrapper[4979]: I0130 23:18:39.785864 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kssd2-config-khdnr" event={"ID":"9a7e245e-175c-4fb3-b0de-b3d99a33548c","Type":"ContainerStarted","Data":"33793d66c62b82fadedf876d0612a42979bc1f8ad6fccd554e52bcadc661b6fd"} Jan 30 23:18:39 crc kubenswrapper[4979]: I0130 23:18:39.787146 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kssd2-config-khdnr" event={"ID":"9a7e245e-175c-4fb3-b0de-b3d99a33548c","Type":"ContainerStarted","Data":"17b815f61ca5cd4ba6b905cd4cd028bc1fba77ac29df1fa57a4af74954b44888"} Jan 30 23:18:39 crc kubenswrapper[4979]: I0130 23:18:39.812440 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kssd2-config-khdnr" podStartSLOduration=2.8124176 podStartE2EDuration="2.8124176s" podCreationTimestamp="2026-01-30 23:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:18:39.804759642 +0000 UTC m=+5915.766006685" watchObservedRunningTime="2026-01-30 23:18:39.8124176 +0000 UTC m=+5915.773664633" Jan 30 23:18:40 crc kubenswrapper[4979]: I0130 23:18:40.798921 4979 generic.go:334] "Generic (PLEG): container finished" podID="9a7e245e-175c-4fb3-b0de-b3d99a33548c" containerID="33793d66c62b82fadedf876d0612a42979bc1f8ad6fccd554e52bcadc661b6fd" exitCode=0 Jan 30 23:18:40 crc kubenswrapper[4979]: I0130 23:18:40.799354 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kssd2-config-khdnr" event={"ID":"9a7e245e-175c-4fb3-b0de-b3d99a33548c","Type":"ContainerDied","Data":"33793d66c62b82fadedf876d0612a42979bc1f8ad6fccd554e52bcadc661b6fd"} Jan 30 23:18:41 crc kubenswrapper[4979]: I0130 23:18:41.701586 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-657b9576cf-gswsb" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.136269 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251606 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-scripts\") pod \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251731 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run-ovn\") pod \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251778 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-additional-scripts\") pod \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251822 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-log-ovn\") pod \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251853 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run\") pod \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251866 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9a7e245e-175c-4fb3-b0de-b3d99a33548c" (UID: "9a7e245e-175c-4fb3-b0de-b3d99a33548c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251913 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5whvz\" (UniqueName: \"kubernetes.io/projected/9a7e245e-175c-4fb3-b0de-b3d99a33548c-kube-api-access-5whvz\") pod \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\" (UID: \"9a7e245e-175c-4fb3-b0de-b3d99a33548c\") " Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.251939 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9a7e245e-175c-4fb3-b0de-b3d99a33548c" (UID: "9a7e245e-175c-4fb3-b0de-b3d99a33548c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.252104 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run" (OuterVolumeSpecName: "var-run") pod "9a7e245e-175c-4fb3-b0de-b3d99a33548c" (UID: "9a7e245e-175c-4fb3-b0de-b3d99a33548c"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.252404 4979 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.252422 4979 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.252431 4979 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a7e245e-175c-4fb3-b0de-b3d99a33548c-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.252462 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9a7e245e-175c-4fb3-b0de-b3d99a33548c" (UID: "9a7e245e-175c-4fb3-b0de-b3d99a33548c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.252656 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-scripts" (OuterVolumeSpecName: "scripts") pod "9a7e245e-175c-4fb3-b0de-b3d99a33548c" (UID: "9a7e245e-175c-4fb3-b0de-b3d99a33548c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.257423 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7e245e-175c-4fb3-b0de-b3d99a33548c-kube-api-access-5whvz" (OuterVolumeSpecName: "kube-api-access-5whvz") pod "9a7e245e-175c-4fb3-b0de-b3d99a33548c" (UID: "9a7e245e-175c-4fb3-b0de-b3d99a33548c"). InnerVolumeSpecName "kube-api-access-5whvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.353896 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5whvz\" (UniqueName: \"kubernetes.io/projected/9a7e245e-175c-4fb3-b0de-b3d99a33548c-kube-api-access-5whvz\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.353938 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.353947 4979 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7e245e-175c-4fb3-b0de-b3d99a33548c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.636646 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-kssd2" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.820235 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kssd2-config-khdnr" event={"ID":"9a7e245e-175c-4fb3-b0de-b3d99a33548c","Type":"ContainerDied","Data":"17b815f61ca5cd4ba6b905cd4cd028bc1fba77ac29df1fa57a4af74954b44888"} Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.820278 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17b815f61ca5cd4ba6b905cd4cd028bc1fba77ac29df1fa57a4af74954b44888" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.820300 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kssd2-config-khdnr" Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.891147 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kssd2-config-khdnr"] Jan 30 23:18:42 crc kubenswrapper[4979]: I0130 23:18:42.901311 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kssd2-config-khdnr"] Jan 30 23:18:43 crc kubenswrapper[4979]: I0130 23:18:43.069930 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:18:43 crc kubenswrapper[4979]: E0130 23:18:43.070430 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:18:43 crc kubenswrapper[4979]: I0130 23:18:43.081314 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a7e245e-175c-4fb3-b0de-b3d99a33548c" path="/var/lib/kubelet/pods/9a7e245e-175c-4fb3-b0de-b3d99a33548c/volumes" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.627196 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-p7ttv"] Jan 30 23:18:50 crc kubenswrapper[4979]: E0130 23:18:50.628092 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7e245e-175c-4fb3-b0de-b3d99a33548c" containerName="ovn-config" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.628105 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7e245e-175c-4fb3-b0de-b3d99a33548c" 
containerName="ovn-config" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.628478 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7e245e-175c-4fb3-b0de-b3d99a33548c" containerName="ovn-config" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.629520 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.639421 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.639632 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.639818 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.644306 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-p7ttv"] Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.674168 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e59aa6da-4048-4cf0-add7-cb98472425cb-scripts\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.674275 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59aa6da-4048-4cf0-add7-cb98472425cb-config-data\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.674311 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e59aa6da-4048-4cf0-add7-cb98472425cb-hm-ports\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.674385 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e59aa6da-4048-4cf0-add7-cb98472425cb-config-data-merged\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.776377 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e59aa6da-4048-4cf0-add7-cb98472425cb-config-data-merged\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.776481 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e59aa6da-4048-4cf0-add7-cb98472425cb-scripts\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.776544 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e59aa6da-4048-4cf0-add7-cb98472425cb-config-data\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.776570 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e59aa6da-4048-4cf0-add7-cb98472425cb-hm-ports\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.777070 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e59aa6da-4048-4cf0-add7-cb98472425cb-config-data-merged\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.777399 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e59aa6da-4048-4cf0-add7-cb98472425cb-hm-ports\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.782188 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e59aa6da-4048-4cf0-add7-cb98472425cb-scripts\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.795333 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59aa6da-4048-4cf0-add7-cb98472425cb-config-data\") pod \"octavia-rsyslog-p7ttv\" (UID: \"e59aa6da-4048-4cf0-add7-cb98472425cb\") " pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:50 crc kubenswrapper[4979]: I0130 23:18:50.960002 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.201474 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8q9t8"] Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.203852 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.209825 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.214775 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8q9t8"] Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.287841 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56781d53-1264-465c-bee8-378a284703f7-httpd-config\") pod \"octavia-image-upload-59f8cff499-8q9t8\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.288133 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/56781d53-1264-465c-bee8-378a284703f7-amphora-image\") pod \"octavia-image-upload-59f8cff499-8q9t8\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.389601 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56781d53-1264-465c-bee8-378a284703f7-httpd-config\") pod \"octavia-image-upload-59f8cff499-8q9t8\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.389954 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/56781d53-1264-465c-bee8-378a284703f7-amphora-image\") pod \"octavia-image-upload-59f8cff499-8q9t8\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.390396 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/56781d53-1264-465c-bee8-378a284703f7-amphora-image\") pod \"octavia-image-upload-59f8cff499-8q9t8\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.394924 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56781d53-1264-465c-bee8-378a284703f7-httpd-config\") pod \"octavia-image-upload-59f8cff499-8q9t8\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.531841 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.570094 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-p7ttv"] Jan 30 23:18:51 crc kubenswrapper[4979]: W0130 23:18:51.587375 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode59aa6da_4048_4cf0_add7_cb98472425cb.slice/crio-bcfa629fe8d96b43ba7770a96879c9ab8bb9ae19827317689eeb0153575bef31 WatchSource:0}: Error finding container bcfa629fe8d96b43ba7770a96879c9ab8bb9ae19827317689eeb0153575bef31: Status 404 returned error can't find the container with id bcfa629fe8d96b43ba7770a96879c9ab8bb9ae19827317689eeb0153575bef31 Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.703782 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-p7ttv"] Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.911952 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-p7ttv" event={"ID":"e59aa6da-4048-4cf0-add7-cb98472425cb","Type":"ContainerStarted","Data":"bcfa629fe8d96b43ba7770a96879c9ab8bb9ae19827317689eeb0153575bef31"} Jan 30 23:18:51 crc kubenswrapper[4979]: I0130 23:18:51.980885 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8q9t8"] Jan 30 23:18:51 crc kubenswrapper[4979]: W0130 23:18:51.985384 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56781d53_1264_465c_bee8_378a284703f7.slice/crio-c48864e7f767bd9ad4a6acb488259616c35f707f9732bb49d0ec4dd7d49fb517 WatchSource:0}: Error finding container c48864e7f767bd9ad4a6acb488259616c35f707f9732bb49d0ec4dd7d49fb517: Status 404 returned error can't find the container with id c48864e7f767bd9ad4a6acb488259616c35f707f9732bb49d0ec4dd7d49fb517 Jan 30 23:18:52 crc kubenswrapper[4979]: I0130 23:18:52.925156 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" event={"ID":"56781d53-1264-465c-bee8-378a284703f7","Type":"ContainerStarted","Data":"c48864e7f767bd9ad4a6acb488259616c35f707f9732bb49d0ec4dd7d49fb517"} Jan 30 23:18:53 crc kubenswrapper[4979]: I0130 23:18:53.936253 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-p7ttv" event={"ID":"e59aa6da-4048-4cf0-add7-cb98472425cb","Type":"ContainerStarted","Data":"f1033d956b8cb5eb27d9bfcbb895ec0576f18b1da54b2ef1eb908919ff379383"} Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.654777 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-pbxbw"] Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.656660 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.661710 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.661762 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.662105 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.665680 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-pbxbw"] Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.808309 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-config-data\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.808389 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e7a38a33-332b-484f-a620-5ecc2b52d9d8-hm-ports\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.808513 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e7a38a33-332b-484f-a620-5ecc2b52d9d8-config-data-merged\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.808595 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-scripts\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.808632 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-combined-ca-bundle\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.808686 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-amphora-certs\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911019 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e7a38a33-332b-484f-a620-5ecc2b52d9d8-config-data-merged\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 
30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911155 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-scripts\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911189 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-combined-ca-bundle\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911242 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-amphora-certs\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911309 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-config-data\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911351 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e7a38a33-332b-484f-a620-5ecc2b52d9d8-hm-ports\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.911776 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e7a38a33-332b-484f-a620-5ecc2b52d9d8-config-data-merged\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.912647 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e7a38a33-332b-484f-a620-5ecc2b52d9d8-hm-ports\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.919854 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-amphora-certs\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.923654 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-combined-ca-bundle\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.924000 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-scripts\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.925113 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a38a33-332b-484f-a620-5ecc2b52d9d8-config-data\") pod \"octavia-healthmanager-pbxbw\" (UID: \"e7a38a33-332b-484f-a620-5ecc2b52d9d8\") " pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.959287 4979 generic.go:334] "Generic (PLEG): container finished" podID="e59aa6da-4048-4cf0-add7-cb98472425cb" containerID="f1033d956b8cb5eb27d9bfcbb895ec0576f18b1da54b2ef1eb908919ff379383" exitCode=0 Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.959334 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-p7ttv" event={"ID":"e59aa6da-4048-4cf0-add7-cb98472425cb","Type":"ContainerDied","Data":"f1033d956b8cb5eb27d9bfcbb895ec0576f18b1da54b2ef1eb908919ff379383"} Jan 30 23:18:55 crc kubenswrapper[4979]: I0130 23:18:55.973204 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.069915 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:18:56 crc kubenswrapper[4979]: E0130 23:18:56.070173 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.559128 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-pbxbw"] Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.659004 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-4bcmq"] Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.660914 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.663526 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.672131 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-4bcmq"] Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.730527 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.730582 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data-merged\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.730608 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-combined-ca-bundle\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.730922 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-scripts\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.832602 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.832670 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data-merged\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.833104 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-combined-ca-bundle\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.833189 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-scripts\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.833352 4979 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data-merged\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.840056 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-scripts\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.840460 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-combined-ca-bundle\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.840608 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data\") pod \"octavia-db-sync-4bcmq\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.973770 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-pbxbw" event={"ID":"e7a38a33-332b-484f-a620-5ecc2b52d9d8","Type":"ContainerStarted","Data":"7314c3a10696c0162279a53d523ca81e2dc33745775732139a4f557e7214ce0f"} Jan 30 23:18:56 crc kubenswrapper[4979]: I0130 23:18:56.995478 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.290860 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-89w6g"] Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.295963 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.299747 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.304606 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-89w6g"] Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.304662 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.446366 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/82154ec9-1201-41a2-a0f2-904b2db3c497-config-data-merged\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.446430 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-combined-ca-bundle\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.446467 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-scripts\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.446527 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-amphora-certs\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.446563 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/82154ec9-1201-41a2-a0f2-904b2db3c497-hm-ports\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.447152 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-config-data\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.553387 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-config-data\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.553756 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/82154ec9-1201-41a2-a0f2-904b2db3c497-config-data-merged\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.553795 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-combined-ca-bundle\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.553824 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-scripts\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.553881 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-amphora-certs\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.553917 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/82154ec9-1201-41a2-a0f2-904b2db3c497-hm-ports\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.563563 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/82154ec9-1201-41a2-a0f2-904b2db3c497-config-data-merged\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.572082 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/82154ec9-1201-41a2-a0f2-904b2db3c497-hm-ports\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.582892 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-combined-ca-bundle\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.583816 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-amphora-certs\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.583908 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-scripts\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " 
pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.584873 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82154ec9-1201-41a2-a0f2-904b2db3c497-config-data\") pod \"octavia-housekeeping-89w6g\" (UID: \"82154ec9-1201-41a2-a0f2-904b2db3c497\") " pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.614495 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.922200 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-4bcmq"] Jan 30 23:18:57 crc kubenswrapper[4979]: I0130 23:18:57.989932 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-pbxbw" event={"ID":"e7a38a33-332b-484f-a620-5ecc2b52d9d8","Type":"ContainerStarted","Data":"5253fb3f28e3f279aeaa5586df71c111f0fa1f5d0fc7f40bb780f79332a13f31"} Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.041912 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-m8s2f"] Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.043678 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.046286 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.046507 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.065937 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-m8s2f"] Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.168321 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-amphora-certs\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.168397 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81ae9dc0-5b82-4990-878a-9570fc849c26-config-data-merged\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.168657 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/81ae9dc0-5b82-4990-878a-9570fc849c26-hm-ports\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.168783 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-config-data\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.168895 4979 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-scripts\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.168939 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-combined-ca-bundle\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271130 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/81ae9dc0-5b82-4990-878a-9570fc849c26-hm-ports\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271209 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-config-data\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271254 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-scripts\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271279 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-combined-ca-bundle\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271359 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-amphora-certs\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271386 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81ae9dc0-5b82-4990-878a-9570fc849c26-config-data-merged\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.271861 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81ae9dc0-5b82-4990-878a-9570fc849c26-config-data-merged\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.272764 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/81ae9dc0-5b82-4990-878a-9570fc849c26-hm-ports\") pod \"octavia-worker-m8s2f\" (UID: 
\"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.278455 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-config-data\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.278552 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-amphora-certs\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.278838 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-scripts\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.279929 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ae9dc0-5b82-4990-878a-9570fc849c26-combined-ca-bundle\") pod \"octavia-worker-m8s2f\" (UID: \"81ae9dc0-5b82-4990-878a-9570fc849c26\") " pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:58 crc kubenswrapper[4979]: I0130 23:18:58.380346 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-m8s2f" Jan 30 23:18:59 crc kubenswrapper[4979]: I0130 23:18:59.001436 4979 generic.go:334] "Generic (PLEG): container finished" podID="e7a38a33-332b-484f-a620-5ecc2b52d9d8" containerID="5253fb3f28e3f279aeaa5586df71c111f0fa1f5d0fc7f40bb780f79332a13f31" exitCode=0 Jan 30 23:18:59 crc kubenswrapper[4979]: I0130 23:18:59.001481 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-pbxbw" event={"ID":"e7a38a33-332b-484f-a620-5ecc2b52d9d8","Type":"ContainerDied","Data":"5253fb3f28e3f279aeaa5586df71c111f0fa1f5d0fc7f40bb780f79332a13f31"} Jan 30 23:18:59 crc kubenswrapper[4979]: I0130 23:18:59.810148 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-pbxbw"] Jan 30 23:19:05 crc kubenswrapper[4979]: I0130 23:19:05.061243 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4bcmq" event={"ID":"b39f85e7-5ff3-4843-87ca-0eaa482d5107","Type":"ContainerStarted","Data":"77ea6560b514db80edcd2b1a784559cc0b05150e8bf7ea65bca5ae3812975520"} Jan 30 23:19:06 crc kubenswrapper[4979]: I0130 23:19:06.216654 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-m8s2f"] Jan 30 23:19:06 crc kubenswrapper[4979]: W0130 23:19:06.224427 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ae9dc0_5b82_4990_878a_9570fc849c26.slice/crio-458ce5ea518e4145fe63ffbb30b199811086e65d5a1130a694e152f5798e27da WatchSource:0}: Error finding container 458ce5ea518e4145fe63ffbb30b199811086e65d5a1130a694e152f5798e27da: Status 404 returned error can't find the container with id 458ce5ea518e4145fe63ffbb30b199811086e65d5a1130a694e152f5798e27da Jan 30 23:19:06 crc kubenswrapper[4979]: I0130 23:19:06.415266 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/octavia-housekeeping-89w6g"] Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.081565 4979 generic.go:334] "Generic (PLEG): container finished" podID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerID="ccc43b745db314daf28ae463940cf548663352e7673aec67c6df25622cd0610d" exitCode=0 Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.081686 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4bcmq" event={"ID":"b39f85e7-5ff3-4843-87ca-0eaa482d5107","Type":"ContainerDied","Data":"ccc43b745db314daf28ae463940cf548663352e7673aec67c6df25622cd0610d"} Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.085099 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-pbxbw" event={"ID":"e7a38a33-332b-484f-a620-5ecc2b52d9d8","Type":"ContainerStarted","Data":"7622d5d9245be933f5ef098f0dba2f280e7a0c263e201fdb4aaa77a617d21abe"} Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.085502 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.087344 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-89w6g" event={"ID":"82154ec9-1201-41a2-a0f2-904b2db3c497","Type":"ContainerStarted","Data":"916e5eaacdcd1ede31ba8014d8722ea3d637e76447806973be824b09129f6af2"} Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.089563 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-p7ttv" event={"ID":"e59aa6da-4048-4cf0-add7-cb98472425cb","Type":"ContainerStarted","Data":"75046a3794d2dc69ec2ac4600f1a4a5d7fd9773b1a17f454be31058128a46988"} Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.089750 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.091272 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-m8s2f" event={"ID":"81ae9dc0-5b82-4990-878a-9570fc849c26","Type":"ContainerStarted","Data":"458ce5ea518e4145fe63ffbb30b199811086e65d5a1130a694e152f5798e27da"} Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.093095 4979 generic.go:334] "Generic (PLEG): container finished" podID="56781d53-1264-465c-bee8-378a284703f7" containerID="a4eb0f4089220dbf1416ca0ec0eb59b035c4c9519b93e961db07573db46163c3" exitCode=0 Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.093130 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" event={"ID":"56781d53-1264-465c-bee8-378a284703f7","Type":"ContainerDied","Data":"a4eb0f4089220dbf1416ca0ec0eb59b035c4c9519b93e961db07573db46163c3"} Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.131133 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-pbxbw" podStartSLOduration=12.131115654 podStartE2EDuration="12.131115654s" podCreationTimestamp="2026-01-30 23:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:19:07.120972899 +0000 UTC m=+5943.082219932" watchObservedRunningTime="2026-01-30 23:19:07.131115654 +0000 UTC m=+5943.092362687" Jan 30 23:19:07 crc kubenswrapper[4979]: I0130 23:19:07.198314 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-p7ttv" podStartSLOduration=3.490749266 
podStartE2EDuration="17.198294842s" podCreationTimestamp="2026-01-30 23:18:50 +0000 UTC" firstStartedPulling="2026-01-30 23:18:51.589609663 +0000 UTC m=+5927.550856696" lastFinishedPulling="2026-01-30 23:19:05.297155219 +0000 UTC m=+5941.258402272" observedRunningTime="2026-01-30 23:19:07.157849217 +0000 UTC m=+5943.119096250" watchObservedRunningTime="2026-01-30 23:19:07.198294842 +0000 UTC m=+5943.159541875" Jan 30 23:19:09 crc kubenswrapper[4979]: I0130 23:19:09.123386 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-m8s2f" event={"ID":"81ae9dc0-5b82-4990-878a-9570fc849c26","Type":"ContainerStarted","Data":"e46fd2d98f839f589b43e1be48ab9473c15daf32d3f69ba46cfc002aa5be542f"} Jan 30 23:19:09 crc kubenswrapper[4979]: I0130 23:19:09.130172 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4bcmq" event={"ID":"b39f85e7-5ff3-4843-87ca-0eaa482d5107","Type":"ContainerStarted","Data":"ff6fff980ddd92a87a7ae04fbc5182179084120991da4ee3062729859c5caa91"} Jan 30 23:19:09 crc kubenswrapper[4979]: I0130 23:19:09.133384 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-89w6g" event={"ID":"82154ec9-1201-41a2-a0f2-904b2db3c497","Type":"ContainerStarted","Data":"f339c71d78f914d15e6d8b3b27812ef5de42c56afea6d3b5c863a4a8de8c97df"} Jan 30 23:19:09 crc kubenswrapper[4979]: I0130 23:19:09.178000 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-4bcmq" podStartSLOduration=13.177966311 podStartE2EDuration="13.177966311s" podCreationTimestamp="2026-01-30 23:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:19:09.177115608 +0000 UTC m=+5945.138362631" watchObservedRunningTime="2026-01-30 23:19:09.177966311 +0000 UTC m=+5945.139213334" Jan 30 23:19:10 crc kubenswrapper[4979]: E0130 23:19:10.606680 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82154ec9_1201_41a2_a0f2_904b2db3c497.slice/crio-conmon-f339c71d78f914d15e6d8b3b27812ef5de42c56afea6d3b5c863a4a8de8c97df.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ae9dc0_5b82_4990_878a_9570fc849c26.slice/crio-e46fd2d98f839f589b43e1be48ab9473c15daf32d3f69ba46cfc002aa5be542f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ae9dc0_5b82_4990_878a_9570fc849c26.slice/crio-conmon-e46fd2d98f839f589b43e1be48ab9473c15daf32d3f69ba46cfc002aa5be542f.scope\": RecentStats: unable to find data in memory cache]" Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.069748 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:19:11 crc kubenswrapper[4979]: E0130 23:19:11.070411 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.155984 
4979 generic.go:334] "Generic (PLEG): container finished" podID="81ae9dc0-5b82-4990-878a-9570fc849c26" containerID="e46fd2d98f839f589b43e1be48ab9473c15daf32d3f69ba46cfc002aa5be542f" exitCode=0 Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.156083 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-m8s2f" event={"ID":"81ae9dc0-5b82-4990-878a-9570fc849c26","Type":"ContainerDied","Data":"e46fd2d98f839f589b43e1be48ab9473c15daf32d3f69ba46cfc002aa5be542f"} Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.157879 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" event={"ID":"56781d53-1264-465c-bee8-378a284703f7","Type":"ContainerStarted","Data":"319b5eede36b509c5c2ac5d2e3a9e083b0677c539eba4e923c9f797bcf243cf8"} Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.160076 4979 generic.go:334] "Generic (PLEG): container finished" podID="82154ec9-1201-41a2-a0f2-904b2db3c497" containerID="f339c71d78f914d15e6d8b3b27812ef5de42c56afea6d3b5c863a4a8de8c97df" exitCode=0 Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.160102 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-89w6g" event={"ID":"82154ec9-1201-41a2-a0f2-904b2db3c497","Type":"ContainerDied","Data":"f339c71d78f914d15e6d8b3b27812ef5de42c56afea6d3b5c863a4a8de8c97df"} Jan 30 23:19:11 crc kubenswrapper[4979]: I0130 23:19:11.255845 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" podStartSLOduration=2.243274556 podStartE2EDuration="20.254074071s" podCreationTimestamp="2026-01-30 23:18:51 +0000 UTC" firstStartedPulling="2026-01-30 23:18:51.987580465 +0000 UTC m=+5927.948827498" lastFinishedPulling="2026-01-30 23:19:09.99837998 +0000 UTC m=+5945.959627013" observedRunningTime="2026-01-30 23:19:11.213449661 +0000 UTC m=+5947.174696724" watchObservedRunningTime="2026-01-30 23:19:11.254074071 +0000 UTC m=+5947.215321104" Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.175864 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-m8s2f" event={"ID":"81ae9dc0-5b82-4990-878a-9570fc849c26","Type":"ContainerStarted","Data":"e6af351ae99b8b8f7dbae150cb9b5f8e2b383e77c428dd70b1380c48df8f3b87"} Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.177686 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-m8s2f" Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.180610 4979 generic.go:334] "Generic (PLEG): container finished" podID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerID="ff6fff980ddd92a87a7ae04fbc5182179084120991da4ee3062729859c5caa91" exitCode=0 Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.180696 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4bcmq" event={"ID":"b39f85e7-5ff3-4843-87ca-0eaa482d5107","Type":"ContainerDied","Data":"ff6fff980ddd92a87a7ae04fbc5182179084120991da4ee3062729859c5caa91"} Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.183859 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-89w6g" event={"ID":"82154ec9-1201-41a2-a0f2-904b2db3c497","Type":"ContainerStarted","Data":"06b2f47bbce1e28ded085c3e47b4ed48b0fa9a310cd07751bb3d2bfa55db20fe"} Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.184902 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-89w6g" Jan 30 
23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.229139 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-89w6g" podStartSLOduration=13.643609945 podStartE2EDuration="15.229119704s" podCreationTimestamp="2026-01-30 23:18:57 +0000 UTC" firstStartedPulling="2026-01-30 23:19:06.437125418 +0000 UTC m=+5942.398372451" lastFinishedPulling="2026-01-30 23:19:08.022635177 +0000 UTC m=+5943.983882210" observedRunningTime="2026-01-30 23:19:12.221650432 +0000 UTC m=+5948.182897465" watchObservedRunningTime="2026-01-30 23:19:12.229119704 +0000 UTC m=+5948.190366747" Jan 30 23:19:12 crc kubenswrapper[4979]: I0130 23:19:12.230995 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-m8s2f" podStartSLOduration=12.442562362 podStartE2EDuration="14.230983254s" podCreationTimestamp="2026-01-30 23:18:58 +0000 UTC" firstStartedPulling="2026-01-30 23:19:06.230966267 +0000 UTC m=+5942.192213300" lastFinishedPulling="2026-01-30 23:19:08.019387169 +0000 UTC m=+5943.980634192" observedRunningTime="2026-01-30 23:19:12.204495187 +0000 UTC m=+5948.165742220" watchObservedRunningTime="2026-01-30 23:19:12.230983254 +0000 UTC m=+5948.192230287" Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.589661 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.696167 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-combined-ca-bundle\") pod \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.696421 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data-merged\") pod \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.696630 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-scripts\") pod \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.696685 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data\") pod \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\" (UID: \"b39f85e7-5ff3-4843-87ca-0eaa482d5107\") " Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.701495 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-scripts" (OuterVolumeSpecName: "scripts") pod "b39f85e7-5ff3-4843-87ca-0eaa482d5107" (UID: "b39f85e7-5ff3-4843-87ca-0eaa482d5107"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.702139 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data" (OuterVolumeSpecName: "config-data") pod "b39f85e7-5ff3-4843-87ca-0eaa482d5107" (UID: "b39f85e7-5ff3-4843-87ca-0eaa482d5107"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.721447 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b39f85e7-5ff3-4843-87ca-0eaa482d5107" (UID: "b39f85e7-5ff3-4843-87ca-0eaa482d5107"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.727487 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "b39f85e7-5ff3-4843-87ca-0eaa482d5107" (UID: "b39f85e7-5ff3-4843-87ca-0eaa482d5107"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.799736 4979 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.799775 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.799785 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:19:13 crc kubenswrapper[4979]: I0130 23:19:13.799793 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39f85e7-5ff3-4843-87ca-0eaa482d5107-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:19:14 crc kubenswrapper[4979]: I0130 23:19:14.208355 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4bcmq" event={"ID":"b39f85e7-5ff3-4843-87ca-0eaa482d5107","Type":"ContainerDied","Data":"77ea6560b514db80edcd2b1a784559cc0b05150e8bf7ea65bca5ae3812975520"} Jan 30 23:19:14 crc kubenswrapper[4979]: I0130 23:19:14.208418 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-4bcmq" Jan 30 23:19:14 crc kubenswrapper[4979]: I0130 23:19:14.208420 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77ea6560b514db80edcd2b1a784559cc0b05150e8bf7ea65bca5ae3812975520" Jan 30 23:19:20 crc kubenswrapper[4979]: I0130 23:19:20.991323 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-p7ttv" Jan 30 23:19:23 crc kubenswrapper[4979]: I0130 23:19:23.069593 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:19:23 crc kubenswrapper[4979]: E0130 23:19:23.070105 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.166141 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2kp2z"] Jan 30 23:19:25 crc kubenswrapper[4979]: E0130 23:19:25.167944 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerName="init" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.168243 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerName="init" Jan 30 23:19:25 crc kubenswrapper[4979]: E0130 23:19:25.168265 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerName="octavia-db-sync" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.168272 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerName="octavia-db-sync" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.168491 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" containerName="octavia-db-sync" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.169960 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.174902 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2kp2z"] Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.339171 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fzsg\" (UniqueName: \"kubernetes.io/projected/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-kube-api-access-2fzsg\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.339284 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-utilities\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.339311 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-catalog-content\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.441346 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-utilities\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.441410 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-catalog-content\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.441544 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fzsg\" (UniqueName: \"kubernetes.io/projected/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-kube-api-access-2fzsg\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.441962 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-utilities\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.442057 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-catalog-content\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.460755 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2fzsg\" (UniqueName: \"kubernetes.io/projected/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-kube-api-access-2fzsg\") pod \"certified-operators-2kp2z\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:25 crc kubenswrapper[4979]: I0130 23:19:25.501948 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:26 crc kubenswrapper[4979]: I0130 23:19:26.002852 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-pbxbw" Jan 30 23:19:26 crc kubenswrapper[4979]: I0130 23:19:26.045080 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2kp2z"] Jan 30 23:19:26 crc kubenswrapper[4979]: I0130 23:19:26.332943 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerID="9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8" exitCode=0 Jan 30 23:19:26 crc kubenswrapper[4979]: I0130 23:19:26.333273 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerDied","Data":"9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8"} Jan 30 23:19:26 crc kubenswrapper[4979]: I0130 23:19:26.333307 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerStarted","Data":"96ab71d86d8b333dc2defe343e0a32695ff14009ef2d6da9df54b0ddd55b9773"} Jan 30 23:19:27 crc kubenswrapper[4979]: I0130 23:19:27.345380 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerStarted","Data":"4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3"} Jan 30 23:19:27 crc kubenswrapper[4979]: I0130 23:19:27.673835 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-89w6g" Jan 30 23:19:28 crc kubenswrapper[4979]: I0130 23:19:28.047149 4979 scope.go:117] "RemoveContainer" containerID="146a28aa66c76f36d0bb7b5d10b9ff7158b1cd544c809454096339f1b214adf4" Jan 30 23:19:28 crc kubenswrapper[4979]: I0130 23:19:28.355476 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerID="4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3" exitCode=0 Jan 30 23:19:28 crc kubenswrapper[4979]: I0130 23:19:28.355542 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerDied","Data":"4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3"} Jan 30 23:19:28 crc kubenswrapper[4979]: I0130 23:19:28.414447 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-m8s2f" Jan 30 23:19:29 crc kubenswrapper[4979]: I0130 23:19:29.375498 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerStarted","Data":"790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e"} Jan 30 23:19:29 crc kubenswrapper[4979]: 
I0130 23:19:29.405709 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2kp2z" podStartSLOduration=1.977104807 podStartE2EDuration="4.405684136s" podCreationTimestamp="2026-01-30 23:19:25 +0000 UTC" firstStartedPulling="2026-01-30 23:19:26.33520018 +0000 UTC m=+5962.296447213" lastFinishedPulling="2026-01-30 23:19:28.763779509 +0000 UTC m=+5964.725026542" observedRunningTime="2026-01-30 23:19:29.392749465 +0000 UTC m=+5965.353996498" watchObservedRunningTime="2026-01-30 23:19:29.405684136 +0000 UTC m=+5965.366931169" Jan 30 23:19:35 crc kubenswrapper[4979]: I0130 23:19:35.076411 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:19:35 crc kubenswrapper[4979]: E0130 23:19:35.077484 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:19:35 crc kubenswrapper[4979]: I0130 23:19:35.502944 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:35 crc kubenswrapper[4979]: I0130 23:19:35.503393 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:35 crc kubenswrapper[4979]: I0130 23:19:35.573142 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:36 crc kubenswrapper[4979]: I0130 23:19:36.493426 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:36 crc kubenswrapper[4979]: I0130 23:19:36.541703 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2kp2z"] Jan 30 23:19:38 crc kubenswrapper[4979]: I0130 23:19:38.460430 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2kp2z" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="registry-server" containerID="cri-o://790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e" gracePeriod=2 Jan 30 23:19:38 crc kubenswrapper[4979]: I0130 23:19:38.985498 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.023658 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fzsg\" (UniqueName: \"kubernetes.io/projected/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-kube-api-access-2fzsg\") pod \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.023793 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-utilities\") pod \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.023880 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-catalog-content\") pod \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\" (UID: \"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc\") " Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.029265 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-kube-api-access-2fzsg" (OuterVolumeSpecName: "kube-api-access-2fzsg") pod "5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" (UID: "5c5b25e4-137f-41a9-a7d5-ca3300cac0cc"). InnerVolumeSpecName "kube-api-access-2fzsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.033141 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-utilities" (OuterVolumeSpecName: "utilities") pod "5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" (UID: "5c5b25e4-137f-41a9-a7d5-ca3300cac0cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.070901 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" (UID: "5c5b25e4-137f-41a9-a7d5-ca3300cac0cc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.126591 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.126635 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fzsg\" (UniqueName: \"kubernetes.io/projected/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-kube-api-access-2fzsg\") on node \"crc\" DevicePath \"\"" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.126652 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.472101 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerID="790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e" exitCode=0 Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.472143 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerDied","Data":"790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e"} Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.472171 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kp2z" event={"ID":"5c5b25e4-137f-41a9-a7d5-ca3300cac0cc","Type":"ContainerDied","Data":"96ab71d86d8b333dc2defe343e0a32695ff14009ef2d6da9df54b0ddd55b9773"} Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.472189 4979 scope.go:117] "RemoveContainer" containerID="790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.472184 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2kp2z" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.496186 4979 scope.go:117] "RemoveContainer" containerID="4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.506060 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2kp2z"] Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.517207 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2kp2z"] Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.520152 4979 scope.go:117] "RemoveContainer" containerID="9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.556853 4979 scope.go:117] "RemoveContainer" containerID="790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e" Jan 30 23:19:39 crc kubenswrapper[4979]: E0130 23:19:39.557320 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e\": container with ID starting with 790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e not found: ID does not exist" containerID="790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.557353 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e"} err="failed to get container status \"790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e\": rpc error: code = NotFound desc = could not find container \"790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e\": container with ID starting with 790b9429d25903f557b86f0c06667ade7fcbd0af7561cc38673385e35cf8f25e not found: ID does not exist" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.557374 4979 scope.go:117] "RemoveContainer" containerID="4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3" Jan 30 23:19:39 crc kubenswrapper[4979]: E0130 23:19:39.557668 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3\": container with ID starting with 4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3 not found: ID does not exist" containerID="4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.557733 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3"} err="failed to get container status \"4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3\": rpc error: code = NotFound desc = could not find container \"4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3\": container with ID starting with 4b5f9ab41e522a0c9af4692c3193f2fa87b563f5b4233e1ffc99e5dddf0975b3 not found: ID does not exist" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.557778 4979 scope.go:117] "RemoveContainer" containerID="9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8" Jan 30 23:19:39 crc kubenswrapper[4979]: E0130 23:19:39.558238 4979 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8\": container with ID starting with 9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8 not found: ID does not exist" containerID="9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8" Jan 30 23:19:39 crc kubenswrapper[4979]: I0130 23:19:39.558288 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8"} err="failed to get container status \"9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8\": rpc error: code = NotFound desc = could not find container \"9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8\": container with ID starting with 9d80b7b6e2141d17239152d8de61222cbc62a442cdf5a9bb103be1af79b759d8 not found: ID does not exist" Jan 30 23:19:41 crc kubenswrapper[4979]: I0130 23:19:41.081644 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" path="/var/lib/kubelet/pods/5c5b25e4-137f-41a9-a7d5-ca3300cac0cc/volumes" Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.244341 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8q9t8"] Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.245270 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" podUID="56781d53-1264-465c-bee8-378a284703f7" containerName="octavia-amphora-httpd" containerID="cri-o://319b5eede36b509c5c2ac5d2e3a9e083b0677c539eba4e923c9f797bcf243cf8" gracePeriod=30 Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.584962 4979 generic.go:334] "Generic (PLEG): container finished" podID="56781d53-1264-465c-bee8-378a284703f7" containerID="319b5eede36b509c5c2ac5d2e3a9e083b0677c539eba4e923c9f797bcf243cf8" exitCode=0 Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.585244 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" event={"ID":"56781d53-1264-465c-bee8-378a284703f7","Type":"ContainerDied","Data":"319b5eede36b509c5c2ac5d2e3a9e083b0677c539eba4e923c9f797bcf243cf8"} Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.868082 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.932260 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/56781d53-1264-465c-bee8-378a284703f7-amphora-image\") pod \"56781d53-1264-465c-bee8-378a284703f7\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.932937 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56781d53-1264-465c-bee8-378a284703f7-httpd-config\") pod \"56781d53-1264-465c-bee8-378a284703f7\" (UID: \"56781d53-1264-465c-bee8-378a284703f7\") " Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.961731 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56781d53-1264-465c-bee8-378a284703f7-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "56781d53-1264-465c-bee8-378a284703f7" (UID: "56781d53-1264-465c-bee8-378a284703f7"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:19:44 crc kubenswrapper[4979]: I0130 23:19:44.963610 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56781d53-1264-465c-bee8-378a284703f7-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "56781d53-1264-465c-bee8-378a284703f7" (UID: "56781d53-1264-465c-bee8-378a284703f7"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.035364 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56781d53-1264-465c-bee8-378a284703f7-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.035412 4979 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/56781d53-1264-465c-bee8-378a284703f7-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.601753 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" event={"ID":"56781d53-1264-465c-bee8-378a284703f7","Type":"ContainerDied","Data":"c48864e7f767bd9ad4a6acb488259616c35f707f9732bb49d0ec4dd7d49fb517"} Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.601827 4979 scope.go:117] "RemoveContainer" containerID="319b5eede36b509c5c2ac5d2e3a9e083b0677c539eba4e923c9f797bcf243cf8" Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.601906 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-8q9t8" Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.643280 4979 scope.go:117] "RemoveContainer" containerID="a4eb0f4089220dbf1416ca0ec0eb59b035c4c9519b93e961db07573db46163c3" Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.651097 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8q9t8"] Jan 30 23:19:45 crc kubenswrapper[4979]: I0130 23:19:45.662620 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8q9t8"] Jan 30 23:19:47 crc kubenswrapper[4979]: I0130 23:19:47.083195 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56781d53-1264-465c-bee8-378a284703f7" path="/var/lib/kubelet/pods/56781d53-1264-465c-bee8-378a284703f7/volumes" Jan 30 23:19:50 crc kubenswrapper[4979]: I0130 23:19:50.070545 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:19:50 crc kubenswrapper[4979]: E0130 23:19:50.071664 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:20:03 crc kubenswrapper[4979]: I0130 23:20:03.077429 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:20:03 crc kubenswrapper[4979]: I0130 23:20:03.815794 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"ce256d253558eef5d462b6fe6f69e6a85674086fe60d9ac7764d0a93afda9e83"} Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.028630 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xxqqz"] Jan 30 23:20:10 crc kubenswrapper[4979]: E0130 23:20:10.029495 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56781d53-1264-465c-bee8-378a284703f7" containerName="octavia-amphora-httpd" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029508 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="56781d53-1264-465c-bee8-378a284703f7" containerName="octavia-amphora-httpd" Jan 30 23:20:10 crc kubenswrapper[4979]: E0130 23:20:10.029524 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="registry-server" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029530 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="registry-server" Jan 30 23:20:10 crc kubenswrapper[4979]: E0130 23:20:10.029539 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="extract-content" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029545 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="extract-content" Jan 30 23:20:10 crc kubenswrapper[4979]: E0130 23:20:10.029562 4979 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="56781d53-1264-465c-bee8-378a284703f7" containerName="init" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029568 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="56781d53-1264-465c-bee8-378a284703f7" containerName="init" Jan 30 23:20:10 crc kubenswrapper[4979]: E0130 23:20:10.029586 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="extract-utilities" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029591 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="extract-utilities" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029764 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="56781d53-1264-465c-bee8-378a284703f7" containerName="octavia-amphora-httpd" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.029775 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5b25e4-137f-41a9-a7d5-ca3300cac0cc" containerName="registry-server" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.031095 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.040584 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xxqqz"] Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.093243 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vk85\" (UniqueName: \"kubernetes.io/projected/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-kube-api-access-6vk85\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.093598 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-utilities\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.093725 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-catalog-content\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.195669 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vk85\" (UniqueName: \"kubernetes.io/projected/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-kube-api-access-6vk85\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.195739 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-utilities\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.195758 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-catalog-content\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.196239 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-catalog-content\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.197013 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-utilities\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.228841 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vk85\" (UniqueName: \"kubernetes.io/projected/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-kube-api-access-6vk85\") pod \"redhat-operators-xxqqz\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.349841 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.825147 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xxqqz"] Jan 30 23:20:10 crc kubenswrapper[4979]: I0130 23:20:10.883583 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqqz" event={"ID":"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83","Type":"ContainerStarted","Data":"7c9bee82bf003525e7c54bb6adac62d1b43208facb084a3db5f4b3d939ef079f"} Jan 30 23:20:11 crc kubenswrapper[4979]: I0130 23:20:11.893709 4979 generic.go:334] "Generic (PLEG): container finished" podID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerID="85e4313a2cc68a06cc0e1950b147f85c65c4690869acc61ef113f874d32f80b3" exitCode=0 Jan 30 23:20:11 crc kubenswrapper[4979]: I0130 23:20:11.894018 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqqz" event={"ID":"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83","Type":"ContainerDied","Data":"85e4313a2cc68a06cc0e1950b147f85c65c4690869acc61ef113f874d32f80b3"} Jan 30 23:20:13 crc kubenswrapper[4979]: I0130 23:20:13.914400 4979 generic.go:334] "Generic (PLEG): container finished" podID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerID="f5bc3f510c6052f923f11253c496940f01168e62bba5a86742401d3a8f876b89" exitCode=0 Jan 30 23:20:13 crc kubenswrapper[4979]: I0130 23:20:13.914481 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqqz" event={"ID":"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83","Type":"ContainerDied","Data":"f5bc3f510c6052f923f11253c496940f01168e62bba5a86742401d3a8f876b89"} Jan 30 23:20:14 crc kubenswrapper[4979]: I0130 23:20:14.926295 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqqz" 
event={"ID":"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83","Type":"ContainerStarted","Data":"96ce80784d1342110273c46f4098af90b1959a4a128a174f3781d17883d2aae5"} Jan 30 23:20:14 crc kubenswrapper[4979]: I0130 23:20:14.947185 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xxqqz" podStartSLOduration=2.447664273 podStartE2EDuration="4.947165293s" podCreationTimestamp="2026-01-30 23:20:10 +0000 UTC" firstStartedPulling="2026-01-30 23:20:11.896078461 +0000 UTC m=+6007.857325494" lastFinishedPulling="2026-01-30 23:20:14.395579481 +0000 UTC m=+6010.356826514" observedRunningTime="2026-01-30 23:20:14.941321655 +0000 UTC m=+6010.902568688" watchObservedRunningTime="2026-01-30 23:20:14.947165293 +0000 UTC m=+6010.908412326" Jan 30 23:20:20 crc kubenswrapper[4979]: I0130 23:20:20.350822 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:20 crc kubenswrapper[4979]: I0130 23:20:20.352177 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:21 crc kubenswrapper[4979]: I0130 23:20:21.424243 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xxqqz" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="registry-server" probeResult="failure" output=< Jan 30 23:20:21 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s Jan 30 23:20:21 crc kubenswrapper[4979]: > Jan 30 23:20:23 crc kubenswrapper[4979]: E0130 23:20:23.306230 4979 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.143:40872->38.102.83.143:38353: write tcp 38.102.83.143:40872->38.102.83.143:38353: write: broken pipe Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.683694 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7fcc7dc57-tn5qb"] Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.685928 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.690433 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.690516 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.690863 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-zdcgh" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.690999 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.709314 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7fcc7dc57-tn5qb"] Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.753906 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.754196 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-log" containerID="cri-o://82374da9e2dd47fd345e23e5da5677353e686a5db8eb072ab0ddef15a716a6f6" gracePeriod=30 Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.754331 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-httpd" containerID="cri-o://55209b1e5d91700ba07d75b2118b6f30952e7e514892e758c421dc19261f2065" gracePeriod=30 Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.767473 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2ab497-2486-4439-ae5b-c2284f870680-horizon-secret-key\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.767535 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bxbz\" (UniqueName: \"kubernetes.io/projected/9d2ab497-2486-4439-ae5b-c2284f870680-kube-api-access-2bxbz\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.767564 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2ab497-2486-4439-ae5b-c2284f870680-logs\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.767600 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-config-data\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.767662 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-scripts\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.796812 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c6577687c-vbmgt"] Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.798723 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.860956 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c6577687c-vbmgt"] Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.869795 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-config-data\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871632 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2ab497-2486-4439-ae5b-c2284f870680-horizon-secret-key\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871691 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bxbz\" (UniqueName: \"kubernetes.io/projected/9d2ab497-2486-4439-ae5b-c2284f870680-kube-api-access-2bxbz\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871714 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97022094-b924-4af4-9725-f91da4c8c957-logs\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871778 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdt2n\" (UniqueName: \"kubernetes.io/projected/97022094-b924-4af4-9725-f91da4c8c957-kube-api-access-zdt2n\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871814 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2ab497-2486-4439-ae5b-c2284f870680-logs\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871850 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97022094-b924-4af4-9725-f91da4c8c957-horizon-secret-key\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871874 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-config-data\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871934 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-scripts\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.871962 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-scripts\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.872905 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-scripts\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.872997 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2ab497-2486-4439-ae5b-c2284f870680-logs\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.873774 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-config-data\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.878632 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.878963 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-log" containerID="cri-o://b5896731cfc487f3a385246f60e958d9a149ef54dc521649352a734a8e89d545" gracePeriod=30 Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.879392 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-httpd" containerID="cri-o://4b0ad042bd3a134b298fe28cc075f762b3f855ad7fe9585ff50024a2eb368232" gracePeriod=30 Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.880791 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2ab497-2486-4439-ae5b-c2284f870680-horizon-secret-key\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.902183 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bxbz\" (UniqueName: 
\"kubernetes.io/projected/9d2ab497-2486-4439-ae5b-c2284f870680-kube-api-access-2bxbz\") pod \"horizon-7fcc7dc57-tn5qb\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") " pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.973663 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-config-data\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.974119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdt2n\" (UniqueName: \"kubernetes.io/projected/97022094-b924-4af4-9725-f91da4c8c957-kube-api-access-zdt2n\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.974143 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97022094-b924-4af4-9725-f91da4c8c957-logs\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.974181 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97022094-b924-4af4-9725-f91da4c8c957-horizon-secret-key\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.974235 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-scripts\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.975144 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-scripts\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.976364 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97022094-b924-4af4-9725-f91da4c8c957-logs\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.976501 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-config-data\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc kubenswrapper[4979]: I0130 23:20:27.979821 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97022094-b924-4af4-9725-f91da4c8c957-horizon-secret-key\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:27 crc 
kubenswrapper[4979]: I0130 23:20:27.993842 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdt2n\" (UniqueName: \"kubernetes.io/projected/97022094-b924-4af4-9725-f91da4c8c957-kube-api-access-zdt2n\") pod \"horizon-c6577687c-vbmgt\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") " pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.020181 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.119170 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.131390 4979 generic.go:334] "Generic (PLEG): container finished" podID="2eceabd7-12d5-42b8-9add-f89801459249" containerID="82374da9e2dd47fd345e23e5da5677353e686a5db8eb072ab0ddef15a716a6f6" exitCode=143 Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.131477 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2eceabd7-12d5-42b8-9add-f89801459249","Type":"ContainerDied","Data":"82374da9e2dd47fd345e23e5da5677353e686a5db8eb072ab0ddef15a716a6f6"} Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.137331 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerID="b5896731cfc487f3a385246f60e958d9a149ef54dc521649352a734a8e89d545" exitCode=143 Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.137378 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a","Type":"ContainerDied","Data":"b5896731cfc487f3a385246f60e958d9a149ef54dc521649352a734a8e89d545"} Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.549858 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fcc7dc57-tn5qb"] Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.586682 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fcc7dc57-tn5qb"] Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.614619 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66977458c7-msp58"] Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.617335 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.629869 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66977458c7-msp58"] Jan 30 23:20:28 crc kubenswrapper[4979]: W0130 23:20:28.685566 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97022094_b924_4af4_9725_f91da4c8c957.slice/crio-607207eba03fa8c4a7bd9dafe27a8d6eca3bcb615fa115aa828cf7f4ce14ddae WatchSource:0}: Error finding container 607207eba03fa8c4a7bd9dafe27a8d6eca3bcb615fa115aa828cf7f4ce14ddae: Status 404 returned error can't find the container with id 607207eba03fa8c4a7bd9dafe27a8d6eca3bcb615fa115aa828cf7f4ce14ddae Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.690635 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c6577687c-vbmgt"] Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.696646 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9525bd9a-233e-4207-ac68-26491c2debf7-horizon-secret-key\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.696717 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-scripts\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.696760 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-config-data\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.696781 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9525bd9a-233e-4207-ac68-26491c2debf7-logs\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.696832 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g48z6\" (UniqueName: \"kubernetes.io/projected/9525bd9a-233e-4207-ac68-26491c2debf7-kube-api-access-g48z6\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.798597 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g48z6\" (UniqueName: \"kubernetes.io/projected/9525bd9a-233e-4207-ac68-26491c2debf7-kube-api-access-g48z6\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.798765 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/9525bd9a-233e-4207-ac68-26491c2debf7-horizon-secret-key\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.798813 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-scripts\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.798866 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-config-data\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.798897 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9525bd9a-233e-4207-ac68-26491c2debf7-logs\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.799546 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9525bd9a-233e-4207-ac68-26491c2debf7-logs\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.799857 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-scripts\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.800491 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-config-data\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.805422 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9525bd9a-233e-4207-ac68-26491c2debf7-horizon-secret-key\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.814256 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g48z6\" (UniqueName: \"kubernetes.io/projected/9525bd9a-233e-4207-ac68-26491c2debf7-kube-api-access-g48z6\") pod \"horizon-66977458c7-msp58\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:28 crc kubenswrapper[4979]: I0130 23:20:28.950853 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:29 crc kubenswrapper[4979]: I0130 23:20:29.168313 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fcc7dc57-tn5qb" event={"ID":"9d2ab497-2486-4439-ae5b-c2284f870680","Type":"ContainerStarted","Data":"17d83bd27fd3e90bc65a85060180708bdecb525bf29dd691bf268c805ec8d75c"} Jan 30 23:20:29 crc kubenswrapper[4979]: I0130 23:20:29.189652 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6577687c-vbmgt" event={"ID":"97022094-b924-4af4-9725-f91da4c8c957","Type":"ContainerStarted","Data":"607207eba03fa8c4a7bd9dafe27a8d6eca3bcb615fa115aa828cf7f4ce14ddae"} Jan 30 23:20:29 crc kubenswrapper[4979]: I0130 23:20:29.414776 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66977458c7-msp58"] Jan 30 23:20:29 crc kubenswrapper[4979]: W0130 23:20:29.433405 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9525bd9a_233e_4207_ac68_26491c2debf7.slice/crio-a3118c5f9754fea70be47aaa37b81d63a2dc940a51c7c4b8fc9c9e7b5a9b37c4 WatchSource:0}: Error finding container a3118c5f9754fea70be47aaa37b81d63a2dc940a51c7c4b8fc9c9e7b5a9b37c4: Status 404 returned error can't find the container with id a3118c5f9754fea70be47aaa37b81d63a2dc940a51c7c4b8fc9c9e7b5a9b37c4 Jan 30 23:20:30 crc kubenswrapper[4979]: I0130 23:20:30.199370 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66977458c7-msp58" event={"ID":"9525bd9a-233e-4207-ac68-26491c2debf7","Type":"ContainerStarted","Data":"a3118c5f9754fea70be47aaa37b81d63a2dc940a51c7c4b8fc9c9e7b5a9b37c4"} Jan 30 23:20:30 crc kubenswrapper[4979]: I0130 23:20:30.411081 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:30 crc kubenswrapper[4979]: I0130 23:20:30.459418 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:30 crc kubenswrapper[4979]: I0130 23:20:30.650900 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xxqqz"] Jan 30 23:20:31 crc kubenswrapper[4979]: I0130 23:20:31.243342 4979 generic.go:334] "Generic (PLEG): container finished" podID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerID="4b0ad042bd3a134b298fe28cc075f762b3f855ad7fe9585ff50024a2eb368232" exitCode=0 Jan 30 23:20:31 crc kubenswrapper[4979]: I0130 23:20:31.243426 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a","Type":"ContainerDied","Data":"4b0ad042bd3a134b298fe28cc075f762b3f855ad7fe9585ff50024a2eb368232"} Jan 30 23:20:31 crc kubenswrapper[4979]: I0130 23:20:31.246567 4979 generic.go:334] "Generic (PLEG): container finished" podID="2eceabd7-12d5-42b8-9add-f89801459249" containerID="55209b1e5d91700ba07d75b2118b6f30952e7e514892e758c421dc19261f2065" exitCode=0 Jan 30 23:20:31 crc kubenswrapper[4979]: I0130 23:20:31.247324 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2eceabd7-12d5-42b8-9add-f89801459249","Type":"ContainerDied","Data":"55209b1e5d91700ba07d75b2118b6f30952e7e514892e758c421dc19261f2065"} Jan 30 23:20:32 crc kubenswrapper[4979]: I0130 23:20:32.253746 4979 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-xxqqz" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="registry-server" containerID="cri-o://96ce80784d1342110273c46f4098af90b1959a4a128a174f3781d17883d2aae5" gracePeriod=2 Jan 30 23:20:32 crc kubenswrapper[4979]: E0130 23:20:32.525326 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ff2fbcd_7776_466a_b4fc_9ffefbb5fc83.slice/crio-conmon-96ce80784d1342110273c46f4098af90b1959a4a128a174f3781d17883d2aae5.scope\": RecentStats: unable to find data in memory cache]" Jan 30 23:20:33 crc kubenswrapper[4979]: I0130 23:20:33.266875 4979 generic.go:334] "Generic (PLEG): container finished" podID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerID="96ce80784d1342110273c46f4098af90b1959a4a128a174f3781d17883d2aae5" exitCode=0 Jan 30 23:20:33 crc kubenswrapper[4979]: I0130 23:20:33.267048 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqqz" event={"ID":"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83","Type":"ContainerDied","Data":"96ce80784d1342110273c46f4098af90b1959a4a128a174f3781d17883d2aae5"} Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.522402 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.712136 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-logs\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.713393 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-config-data\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.713480 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-scripts\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.713635 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzpd5\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-kube-api-access-zzpd5\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.713685 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-httpd-run\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.713759 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-combined-ca-bundle\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 
23:20:36.713860 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-ceph\") pod \"2eceabd7-12d5-42b8-9add-f89801459249\" (UID: \"2eceabd7-12d5-42b8-9add-f89801459249\") " Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.714570 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-logs" (OuterVolumeSpecName: "logs") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.714966 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.716117 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.716135 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eceabd7-12d5-42b8-9add-f89801459249-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.720357 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-kube-api-access-zzpd5" (OuterVolumeSpecName: "kube-api-access-zzpd5") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "kube-api-access-zzpd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.720601 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-scripts" (OuterVolumeSpecName: "scripts") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.755419 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-ceph" (OuterVolumeSpecName: "ceph") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.786777 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.817871 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.817905 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzpd5\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-kube-api-access-zzpd5\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.817915 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.817923 4979 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2eceabd7-12d5-42b8-9add-f89801459249-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.848334 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxqqz" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.885190 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-config-data" (OuterVolumeSpecName: "config-data") pod "2eceabd7-12d5-42b8-9add-f89801459249" (UID: "2eceabd7-12d5-42b8-9add-f89801459249"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.919832 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eceabd7-12d5-42b8-9add-f89801459249-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:36 crc kubenswrapper[4979]: I0130 23:20:36.978428 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.020898 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vk85\" (UniqueName: \"kubernetes.io/projected/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-kube-api-access-6vk85\") pod \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.021030 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-utilities\") pod \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.021218 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-catalog-content\") pod \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\" (UID: \"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.023158 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-utilities" (OuterVolumeSpecName: "utilities") pod "1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" (UID: "1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.029170 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-kube-api-access-6vk85" (OuterVolumeSpecName: "kube-api-access-6vk85") pod "1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" (UID: "1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83"). InnerVolumeSpecName "kube-api-access-6vk85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.122842 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-combined-ca-bundle\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.122948 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-scripts\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123014 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg4wj\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-kube-api-access-hg4wj\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123258 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-config-data\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123304 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-httpd-run\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123339 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-ceph\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123428 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-logs\") pod \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\" (UID: \"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a\") " Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123862 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vk85\" (UniqueName: \"kubernetes.io/projected/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-kube-api-access-6vk85\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.123880 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.124529 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-logs" (OuterVolumeSpecName: "logs") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.124614 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.127652 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-kube-api-access-hg4wj" (OuterVolumeSpecName: "kube-api-access-hg4wj") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "kube-api-access-hg4wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.134578 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-ceph" (OuterVolumeSpecName: "ceph") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.142449 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-scripts" (OuterVolumeSpecName: "scripts") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.173119 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" (UID: "1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.219498 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225878 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225908 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225919 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg4wj\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-kube-api-access-hg4wj\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225930 4979 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225938 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225946 4979 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.225954 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.244120 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-config-data" (OuterVolumeSpecName: "config-data") pod "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" (UID: "5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.308015 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6577687c-vbmgt" event={"ID":"97022094-b924-4af4-9725-f91da4c8c957","Type":"ContainerStarted","Data":"aabf2fa7c84990e41e200a9091e9e1684120f5c07d0330702a47b20463fc7a2e"} Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.308279 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6577687c-vbmgt" event={"ID":"97022094-b924-4af4-9725-f91da4c8c957","Type":"ContainerStarted","Data":"1843ff126303523abb6ffa0db5c21019540d237b2e87b5173765021c3bff4819"} Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.310086 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a","Type":"ContainerDied","Data":"028416f98f1589f42948c16d708a30eb83a748a9cb41317ffc40f3e850e93529"} Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.310135 4979 scope.go:117] "RemoveContainer" containerID="4b0ad042bd3a134b298fe28cc075f762b3f855ad7fe9585ff50024a2eb368232" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.310095 4979 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.310095 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.312799 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqqz" event={"ID":"1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83","Type":"ContainerDied","Data":"7c9bee82bf003525e7c54bb6adac62d1b43208facb084a3db5f4b3d939ef079f"}
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.312871 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxqqz"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.318589 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66977458c7-msp58" event={"ID":"9525bd9a-233e-4207-ac68-26491c2debf7","Type":"ContainerStarted","Data":"f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd"}
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.318663 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66977458c7-msp58" event={"ID":"9525bd9a-233e-4207-ac68-26491c2debf7","Type":"ContainerStarted","Data":"39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83"}
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.320698 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.320691 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2eceabd7-12d5-42b8-9add-f89801459249","Type":"ContainerDied","Data":"2b4011d528c5f4fb422a09023b408acf162ec5c71b0df8f8e4325d433023d56a"}
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.323056 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fcc7dc57-tn5qb" event={"ID":"9d2ab497-2486-4439-ae5b-c2284f870680","Type":"ContainerStarted","Data":"24eed203165d6bac9c25b9f69e0b123b8b8d84ceb8e1f54a8d0f6f185fc264c0"}
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.323093 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fcc7dc57-tn5qb" event={"ID":"9d2ab497-2486-4439-ae5b-c2284f870680","Type":"ContainerStarted","Data":"c47efe70e6ca24494b95717f74004d682a3643adc04c11e6a9f67a0c466b4af0"}
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.323220 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7fcc7dc57-tn5qb" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon-log" containerID="cri-o://c47efe70e6ca24494b95717f74004d682a3643adc04c11e6a9f67a0c466b4af0" gracePeriod=30
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.323471 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7fcc7dc57-tn5qb" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon" containerID="cri-o://24eed203165d6bac9c25b9f69e0b123b8b8d84ceb8e1f54a8d0f6f185fc264c0" gracePeriod=30
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.328164 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.341343 4979 scope.go:117] "RemoveContainer" containerID="b5896731cfc487f3a385246f60e958d9a149ef54dc521649352a734a8e89d545"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.350809 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c6577687c-vbmgt" podStartSLOduration=2.502330244 podStartE2EDuration="10.350788888s" podCreationTimestamp="2026-01-30 23:20:27 +0000 UTC" firstStartedPulling="2026-01-30 23:20:28.687746024 +0000 UTC m=+6024.648993057" lastFinishedPulling="2026-01-30 23:20:36.536204678 +0000 UTC m=+6032.497451701" observedRunningTime="2026-01-30 23:20:37.342183795 +0000 UTC m=+6033.303430838" watchObservedRunningTime="2026-01-30 23:20:37.350788888 +0000 UTC m=+6033.312035911"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.362629 4979 scope.go:117] "RemoveContainer" containerID="96ce80784d1342110273c46f4098af90b1959a4a128a174f3781d17883d2aae5"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.379665 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66977458c7-msp58" podStartSLOduration=2.280605363 podStartE2EDuration="9.37963943s" podCreationTimestamp="2026-01-30 23:20:28 +0000 UTC" firstStartedPulling="2026-01-30 23:20:29.437384726 +0000 UTC m=+6025.398631759" lastFinishedPulling="2026-01-30 23:20:36.536418773 +0000 UTC m=+6032.497665826" observedRunningTime="2026-01-30 23:20:37.363227935 +0000 UTC m=+6033.324474968" watchObservedRunningTime="2026-01-30 23:20:37.37963943 +0000 UTC m=+6033.340886463"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.390047 4979 scope.go:117] "RemoveContainer" containerID="f5bc3f510c6052f923f11253c496940f01168e62bba5a86742401d3a8f876b89"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.392911 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xxqqz"]
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.418258 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xxqqz"]
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.420553 4979 scope.go:117] "RemoveContainer" containerID="85e4313a2cc68a06cc0e1950b147f85c65c4690869acc61ef113f874d32f80b3"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.436346 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.455898 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.484217 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.500695 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.501958 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7fcc7dc57-tn5qb" podStartSLOduration=2.494362629 podStartE2EDuration="10.50193551s" podCreationTimestamp="2026-01-30 23:20:27 +0000 UTC" firstStartedPulling="2026-01-30 23:20:28.58673906 +0000 UTC m=+6024.547986093" lastFinishedPulling="2026-01-30 23:20:36.594311941 +0000 UTC m=+6032.555558974" observedRunningTime="2026-01-30 23:20:37.425412838 +0000 UTC m=+6033.386659871" watchObservedRunningTime="2026-01-30 23:20:37.50193551 +0000 UTC m=+6033.463182543"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.511597 4979 scope.go:117] "RemoveContainer" containerID="55209b1e5d91700ba07d75b2118b6f30952e7e514892e758c421dc19261f2065"
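
The three "Observed pod startup duration" records above all satisfy the same arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E value minus the image-pull window, lastFinishedPulling minus firstStartedPulling (taken from the monotonic m=+ offsets, which is why the wall-clock subtraction can look off by a few tens of nanoseconds). For horizon-7fcc7dc57-tn5qb: 10.50193551s - (6032.555558974s - 6024.547986093s) = 2.494362629s, exactly the logged SLO value. A short reproduction with the tn5qb numbers copied from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    // Reproduces the startup-latency arithmetic logged above for
    // horizon-7fcc7dc57-tn5qb: E2E = watchObservedRunningTime - podCreationTimestamp,
    // and the SLO duration excludes the image-pull window. All numbers are
    // copied from the log entry; the monotonic m=+ offsets give the pull
    // window exactly.
    func main() {
        created := time.Date(2026, 1, 30, 23, 20, 27, 0, time.UTC)
        observedRunning := time.Date(2026, 1, 30, 23, 20, 37, 501935510, time.UTC)

        // m=+6024.547986093 and m=+6032.555558974, converted to nanoseconds.
        const firstStartedPulling = 6024547986093 * time.Nanosecond
        const lastFinishedPulling = 6032555558974 * time.Nanosecond

        e2e := observedRunning.Sub(created)
        slo := e2e - (lastFinishedPulling - firstStartedPulling)

        fmt.Println(e2e) // 10.50193551s == podStartE2EDuration
        fmt.Println(slo) // 2.494362629s == podStartSLOduration
    }

For the glance pods recreated further down, firstStartedPulling and lastFinishedPulling are the zero time (0001-01-01), i.e. no image pull was needed, so the pull window is zero and podStartSLOduration equals podStartE2EDuration (3.398134648s and 3.431407749s respectively).
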
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.527611 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528067 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="registry-server"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528081 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="registry-server"
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528097 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-httpd"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528104 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-httpd"
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528123 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="extract-content"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528129 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="extract-content"
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528144 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-log"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528149 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-log"
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528164 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="extract-utilities"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528170 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="extract-utilities"
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528182 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-httpd"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528189 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-httpd"
Jan 30 23:20:37 crc kubenswrapper[4979]: E0130 23:20:37.528206 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-log"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528211 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-log"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528385 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-httpd"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528395 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eceabd7-12d5-42b8-9add-f89801459249" containerName="glance-log"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528418 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-httpd"
Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528428 4979
memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" containerName="registry-server" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.528440 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" containerName="glance-log" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.529600 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.535564 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.535734 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.535891 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-92r4q" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.536210 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.538246 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.539938 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.544797 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.553943 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.574258 4979 scope.go:117] "RemoveContainer" containerID="82374da9e2dd47fd345e23e5da5677353e686a5db8eb072ab0ddef15a716a6f6" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633610 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67c81730-0360-4ee7-a657-774bab3e5ce1-logs\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633657 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-config-data\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633689 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633725 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prb2q\" (UniqueName: \"kubernetes.io/projected/67c81730-0360-4ee7-a657-774bab3e5ce1-kube-api-access-prb2q\") pod 
\"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633754 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633791 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633890 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633917 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633951 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67c81730-0360-4ee7-a657-774bab3e5ce1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.633967 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.634255 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czffz\" (UniqueName: \"kubernetes.io/projected/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-kube-api-access-czffz\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.634323 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/67c81730-0360-4ee7-a657-774bab3e5ce1-ceph\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.634341 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-scripts\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.634392 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.736818 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.736891 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.736954 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.736989 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737034 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67c81730-0360-4ee7-a657-774bab3e5ce1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737165 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737225 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czffz\" (UniqueName: \"kubernetes.io/projected/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-kube-api-access-czffz\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737245 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/67c81730-0360-4ee7-a657-774bab3e5ce1-ceph\") pod 
\"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737262 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-scripts\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737278 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737298 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67c81730-0360-4ee7-a657-774bab3e5ce1-logs\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737304 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737315 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-config-data\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737340 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737422 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737513 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prb2q\" (UniqueName: \"kubernetes.io/projected/67c81730-0360-4ee7-a657-774bab3e5ce1-kube-api-access-prb2q\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.737604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67c81730-0360-4ee7-a657-774bab3e5ce1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 
30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.738167 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67c81730-0360-4ee7-a657-774bab3e5ce1-logs\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.742185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.743635 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.743929 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.745302 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/67c81730-0360-4ee7-a657-774bab3e5ce1-ceph\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.747828 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-scripts\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.750013 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.753793 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.755861 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c81730-0360-4ee7-a657-774bab3e5ce1-config-data\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.757464 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czffz\" (UniqueName: 
\"kubernetes.io/projected/0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a-kube-api-access-czffz\") pod \"glance-default-internal-api-0\" (UID: \"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a\") " pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.758319 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prb2q\" (UniqueName: \"kubernetes.io/projected/67c81730-0360-4ee7-a657-774bab3e5ce1-kube-api-access-prb2q\") pod \"glance-default-external-api-0\" (UID: \"67c81730-0360-4ee7-a657-774bab3e5ce1\") " pod="openstack/glance-default-external-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.865125 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:37 crc kubenswrapper[4979]: I0130 23:20:37.879337 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.021906 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7fcc7dc57-tn5qb" Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.121074 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.121123 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c6577687c-vbmgt" Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.516174 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.603149 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.953336 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:38 crc kubenswrapper[4979]: I0130 23:20:38.953394 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:20:39 crc kubenswrapper[4979]: I0130 23:20:39.090087 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83" path="/var/lib/kubelet/pods/1ff2fbcd-7776-466a-b4fc-9ffefbb5fc83/volumes" Jan 30 23:20:39 crc kubenswrapper[4979]: I0130 23:20:39.091133 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eceabd7-12d5-42b8-9add-f89801459249" path="/var/lib/kubelet/pods/2eceabd7-12d5-42b8-9add-f89801459249/volumes" Jan 30 23:20:39 crc kubenswrapper[4979]: I0130 23:20:39.091772 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a" path="/var/lib/kubelet/pods/5c75f95c-8b95-4cd7-9c3c-0ee1ff15cf7a/volumes" Jan 30 23:20:39 crc kubenswrapper[4979]: I0130 23:20:39.351643 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67c81730-0360-4ee7-a657-774bab3e5ce1","Type":"ContainerStarted","Data":"4bf4a91a7c05a8cba94f918f727a124c8ff9328e1632948df22dbd481e5db031"} Jan 30 23:20:39 crc kubenswrapper[4979]: I0130 23:20:39.352962 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a","Type":"ContainerStarted","Data":"817c3a28a213997b0532f8b96aa7caffd9f1684b511c7c605774368c55b9160f"} Jan 30 23:20:40 crc kubenswrapper[4979]: I0130 23:20:40.364995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67c81730-0360-4ee7-a657-774bab3e5ce1","Type":"ContainerStarted","Data":"a109a6f4211a28840305607d7f68018d9fbd8da26ce2ec45158c294ff9280342"} Jan 30 23:20:40 crc kubenswrapper[4979]: I0130 23:20:40.366665 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67c81730-0360-4ee7-a657-774bab3e5ce1","Type":"ContainerStarted","Data":"7ffab0db6c226419028deb4e521d3b1dc32f71dcc4771ad0a23a0221dcec5607"} Jan 30 23:20:40 crc kubenswrapper[4979]: I0130 23:20:40.371972 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a","Type":"ContainerStarted","Data":"868aaaac2b29e60ec894a1a8d01e6f4d984a8b589ebcbfff89491e83274e523a"} Jan 30 23:20:40 crc kubenswrapper[4979]: I0130 23:20:40.372030 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a","Type":"ContainerStarted","Data":"d0bcf15098c54793173bc634d1d439bc3075d0c0b45bdf3cfd75b3e26f7400a3"} Jan 30 23:20:40 crc kubenswrapper[4979]: I0130 23:20:40.398163 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.398134648 podStartE2EDuration="3.398134648s" podCreationTimestamp="2026-01-30 23:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:20:40.389236587 +0000 UTC m=+6036.350483620" watchObservedRunningTime="2026-01-30 23:20:40.398134648 +0000 UTC m=+6036.359381691" Jan 30 23:20:40 crc kubenswrapper[4979]: I0130 23:20:40.431433 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.431407749 podStartE2EDuration="3.431407749s" podCreationTimestamp="2026-01-30 23:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:20:40.425460538 +0000 UTC m=+6036.386707581" watchObservedRunningTime="2026-01-30 23:20:40.431407749 +0000 UTC m=+6036.392654802" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.060275 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-mdk2v"] Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.079424 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-088a-account-create-update-gl7pk"] Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.085578 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-088a-account-create-update-gl7pk"] Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.094748 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-mdk2v"] Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.866215 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.866263 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.881101 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.882800 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.920988 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.930978 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.937895 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:47 crc kubenswrapper[4979]: I0130 23:20:47.944131 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 23:20:48 crc kubenswrapper[4979]: I0130 23:20:48.123629 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c6577687c-vbmgt" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 30 23:20:48 crc kubenswrapper[4979]: I0130 23:20:48.462434 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:48 crc kubenswrapper[4979]: I0130 23:20:48.462481 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 23:20:48 crc kubenswrapper[4979]: I0130 23:20:48.462495 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 23:20:48 crc kubenswrapper[4979]: I0130 23:20:48.462508 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 23:20:48 crc kubenswrapper[4979]: I0130 23:20:48.954257 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Jan 30 23:20:49 crc kubenswrapper[4979]: I0130 23:20:49.080829 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="800775b4-f78f-4f2f-9d21-4dd42458db2b" path="/var/lib/kubelet/pods/800775b4-f78f-4f2f-9d21-4dd42458db2b/volumes" Jan 30 23:20:49 crc kubenswrapper[4979]: I0130 23:20:49.081651 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5bf2d6f-952e-4cec-938b-e1d00042c3ad" path="/var/lib/kubelet/pods/c5bf2d6f-952e-4cec-938b-e1d00042c3ad/volumes" Jan 30 23:20:50 crc kubenswrapper[4979]: I0130 23:20:50.421327 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 23:20:50 crc kubenswrapper[4979]: I0130 23:20:50.423252 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 23:20:50 crc 
Jan 30 23:20:50 crc kubenswrapper[4979]: I0130 23:20:50.431635 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 23:20:50 crc kubenswrapper[4979]: I0130 23:20:50.476631 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 23:20:53 crc kubenswrapper[4979]: I0130 23:20:53.060099 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qpzjk"]
Jan 30 23:20:53 crc kubenswrapper[4979]: I0130 23:20:53.103343 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qpzjk"]
Jan 30 23:20:55 crc kubenswrapper[4979]: I0130 23:20:55.084438 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="338244cb-adb6-4402-ba74-378f70078ebd" path="/var/lib/kubelet/pods/338244cb-adb6-4402-ba74-378f70078ebd/volumes"
Jan 30 23:20:59 crc kubenswrapper[4979]: I0130 23:20:59.754312 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-c6577687c-vbmgt"
Jan 30 23:21:00 crc kubenswrapper[4979]: I0130 23:21:00.639695 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-66977458c7-msp58"
Jan 30 23:21:01 crc kubenswrapper[4979]: I0130 23:21:01.392337 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-c6577687c-vbmgt"
Jan 30 23:21:02 crc kubenswrapper[4979]: I0130 23:21:02.478602 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-66977458c7-msp58"
Jan 30 23:21:02 crc kubenswrapper[4979]: I0130 23:21:02.551740 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c6577687c-vbmgt"]
Jan 30 23:21:02 crc kubenswrapper[4979]: I0130 23:21:02.552279 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c6577687c-vbmgt" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon-log" containerID="cri-o://1843ff126303523abb6ffa0db5c21019540d237b2e87b5173765021c3bff4819" gracePeriod=30
Jan 30 23:21:02 crc kubenswrapper[4979]: I0130 23:21:02.552432 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c6577687c-vbmgt" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" containerID="cri-o://aabf2fa7c84990e41e200a9091e9e1684120f5c07d0330702a47b20463fc7a2e" gracePeriod=30
Jan 30 23:21:06 crc kubenswrapper[4979]: I0130 23:21:06.649397 4979 generic.go:334] "Generic (PLEG): container finished" podID="97022094-b924-4af4-9725-f91da4c8c957" containerID="aabf2fa7c84990e41e200a9091e9e1684120f5c07d0330702a47b20463fc7a2e" exitCode=0
Jan 30 23:21:06 crc kubenswrapper[4979]: I0130 23:21:06.649555 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6577687c-vbmgt" event={"ID":"97022094-b924-4af4-9725-f91da4c8c957","Type":"ContainerDied","Data":"aabf2fa7c84990e41e200a9091e9e1684120f5c07d0330702a47b20463fc7a2e"}
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.669722 4979 generic.go:334] "Generic (PLEG): container finished" podID="9d2ab497-2486-4439-ae5b-c2284f870680" containerID="24eed203165d6bac9c25b9f69e0b123b8b8d84ceb8e1f54a8d0f6f185fc264c0" exitCode=137
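
The container-finished records on either side of this note carry the two instructive exit codes. horizon-c6577687c-vbmgt's horizon container, asked to stop at 23:21:02.552 with gracePeriod=30, shut down on SIGTERM within about four seconds and reports exitCode=0. horizon-7fcc7dc57-tn5qb's containers (24eed203... above, c47efe70... just below) had been signalled back at 23:20:37.323; they were still running when the 30-second grace period lapsed, so the runtime escalated to SIGKILL at ~23:21:07.67, which surfaces as exitCode=137 under the usual 128+signal convention. A tiny decoder for that convention (illustrative only):

    package main

    import (
        "fmt"
        "syscall"
    )

    // Decodes container exit codes using the 128+N convention for death by
    // signal N: 137 -> SIGKILL (sent once the grace period expires), while
    // 0 is a clean exit after SIGTERM. Illustrative only.
    func describe(code int) string {
        if code > 128 {
            sig := syscall.Signal(code - 128)
            return fmt.Sprintf("killed by signal %d (%v)", code-128, sig)
        }
        return fmt.Sprintf("exited normally, status %d", code)
    }

    func main() {
        fmt.Println("exitCode=137:", describe(137)) // killed by signal 9 (killed)
        fmt.Println("exitCode=0:", describe(0))     // exited normally, status 0
    }
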
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.669762 4979 generic.go:334] "Generic (PLEG): container finished" podID="9d2ab497-2486-4439-ae5b-c2284f870680" containerID="c47efe70e6ca24494b95717f74004d682a3643adc04c11e6a9f67a0c466b4af0" exitCode=137
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.669787 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fcc7dc57-tn5qb" event={"ID":"9d2ab497-2486-4439-ae5b-c2284f870680","Type":"ContainerDied","Data":"24eed203165d6bac9c25b9f69e0b123b8b8d84ceb8e1f54a8d0f6f185fc264c0"}
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.669820 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fcc7dc57-tn5qb" event={"ID":"9d2ab497-2486-4439-ae5b-c2284f870680","Type":"ContainerDied","Data":"c47efe70e6ca24494b95717f74004d682a3643adc04c11e6a9f67a0c466b4af0"}
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.825948 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7fcc7dc57-tn5qb"
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.880776 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-config-data\") pod \"9d2ab497-2486-4439-ae5b-c2284f870680\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") "
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.881251 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2ab497-2486-4439-ae5b-c2284f870680-logs\") pod \"9d2ab497-2486-4439-ae5b-c2284f870680\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") "
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.881369 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-scripts\") pod \"9d2ab497-2486-4439-ae5b-c2284f870680\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") "
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.881515 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bxbz\" (UniqueName: \"kubernetes.io/projected/9d2ab497-2486-4439-ae5b-c2284f870680-kube-api-access-2bxbz\") pod \"9d2ab497-2486-4439-ae5b-c2284f870680\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") "
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.881638 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2ab497-2486-4439-ae5b-c2284f870680-horizon-secret-key\") pod \"9d2ab497-2486-4439-ae5b-c2284f870680\" (UID: \"9d2ab497-2486-4439-ae5b-c2284f870680\") "
Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.881865 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d2ab497-2486-4439-ae5b-c2284f870680-logs" (OuterVolumeSpecName: "logs") pod "9d2ab497-2486-4439-ae5b-c2284f870680" (UID: "9d2ab497-2486-4439-ae5b-c2284f870680"). InnerVolumeSpecName "logs".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.882446 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d2ab497-2486-4439-ae5b-c2284f870680-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.890879 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2ab497-2486-4439-ae5b-c2284f870680-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9d2ab497-2486-4439-ae5b-c2284f870680" (UID: "9d2ab497-2486-4439-ae5b-c2284f870680"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.894284 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2ab497-2486-4439-ae5b-c2284f870680-kube-api-access-2bxbz" (OuterVolumeSpecName: "kube-api-access-2bxbz") pod "9d2ab497-2486-4439-ae5b-c2284f870680" (UID: "9d2ab497-2486-4439-ae5b-c2284f870680"). InnerVolumeSpecName "kube-api-access-2bxbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.911719 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-config-data" (OuterVolumeSpecName: "config-data") pod "9d2ab497-2486-4439-ae5b-c2284f870680" (UID: "9d2ab497-2486-4439-ae5b-c2284f870680"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.915604 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-scripts" (OuterVolumeSpecName: "scripts") pod "9d2ab497-2486-4439-ae5b-c2284f870680" (UID: "9d2ab497-2486-4439-ae5b-c2284f870680"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.984776 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bxbz\" (UniqueName: \"kubernetes.io/projected/9d2ab497-2486-4439-ae5b-c2284f870680-kube-api-access-2bxbz\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.984813 4979 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d2ab497-2486-4439-ae5b-c2284f870680-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.984824 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:07 crc kubenswrapper[4979]: I0130 23:21:07.984832 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2ab497-2486-4439-ae5b-c2284f870680-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.121550 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c6577687c-vbmgt" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.686197 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fcc7dc57-tn5qb" event={"ID":"9d2ab497-2486-4439-ae5b-c2284f870680","Type":"ContainerDied","Data":"17d83bd27fd3e90bc65a85060180708bdecb525bf29dd691bf268c805ec8d75c"} Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.686265 4979 scope.go:117] "RemoveContainer" containerID="24eed203165d6bac9c25b9f69e0b123b8b8d84ceb8e1f54a8d0f6f185fc264c0" Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.686438 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7fcc7dc57-tn5qb"
Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.739419 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fcc7dc57-tn5qb"]
Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.757158 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7fcc7dc57-tn5qb"]
Jan 30 23:21:08 crc kubenswrapper[4979]: I0130 23:21:08.910881 4979 scope.go:117] "RemoveContainer" containerID="c47efe70e6ca24494b95717f74004d682a3643adc04c11e6a9f67a0c466b4af0"
Jan 30 23:21:09 crc kubenswrapper[4979]: I0130 23:21:09.083617 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" path="/var/lib/kubelet/pods/9d2ab497-2486-4439-ae5b-c2284f870680/volumes"
Jan 30 23:21:18 crc kubenswrapper[4979]: I0130 23:21:18.120834 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c6577687c-vbmgt" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused"
Jan 30 23:21:21 crc kubenswrapper[4979]: I0130 23:21:21.032230 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2ltc5"]
Jan 30 23:21:21 crc kubenswrapper[4979]: I0130 23:21:21.045327 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5575-account-create-update-hrq7w"]
Jan 30 23:21:21 crc kubenswrapper[4979]: I0130 23:21:21.055528 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5575-account-create-update-hrq7w"]
Jan 30 23:21:21 crc kubenswrapper[4979]: I0130 23:21:21.065603 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2ltc5"]
Jan 30 23:21:21 crc kubenswrapper[4979]: I0130 23:21:21.081489 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b871a72e-a648-4c40-b5eb-604c75307e21" path="/var/lib/kubelet/pods/b871a72e-a648-4c40-b5eb-604c75307e21/volumes"
Jan 30 23:21:21 crc kubenswrapper[4979]: I0130 23:21:21.082464 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92c4a95-be2f-4c0d-a789-f7505dcdfd97" path="/var/lib/kubelet/pods/b92c4a95-be2f-4c0d-a789-f7505dcdfd97/volumes"
Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.121131 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c6577687c-vbmgt" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused"
Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.122798 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c6577687c-vbmgt"
Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.419654 4979 scope.go:117] "RemoveContainer" containerID="679b3058f3f485f291f252eaf3bb8918f69b6e3a441b2d5608224e19d4b90456"
Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.456503 4979 scope.go:117] "RemoveContainer" containerID="7bb8706f925ad6381a16e97222bca04fa77399d32e5cbac62e5ced735b44c617"
Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.498747 4979 scope.go:117] "RemoveContainer" containerID="46d964a0839cd8efea2510cfac9bc323533200f0741e6142ba6a532c576e85b4"
Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.527515 4979 scope.go:117] "RemoveContainer" containerID="6488fa6f75b07a884cb1c9e243ae1419c47f4f31507eee906fc6e83084e37e42"
Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.592107 4979 scope.go:117] "RemoveContainer" containerID="088e2e7d854d7dc05cd4dbe8fe4c7ffcbdee731d873f6f602ab10d8c9fb6c170"
Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.649568 4979 scope.go:117] "RemoveContainer" containerID="6e9b936da74c87dcee37685c96b2ae5e396a4383a9a39ae0063e6c3ec2306db6"
Jan 30 23:21:28 crc kubenswrapper[4979]: I0130 23:21:28.693741 4979 scope.go:117] "RemoveContainer" containerID="7d37ab6343b96618c109dfaf1d8e673f2a0db0f5f37da07bff5cdaeffc9889e5"
Jan 30 23:21:29 crc kubenswrapper[4979]: I0130 23:21:29.083000 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-2w7hf"]
Jan 30 23:21:29 crc kubenswrapper[4979]: I0130 23:21:29.088809 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-2w7hf"]
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.164471 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ncmfr"]
Jan 30 23:21:30 crc kubenswrapper[4979]: E0130 23:21:30.165230 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.165245 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon"
Jan 30 23:21:30 crc kubenswrapper[4979]: E0130 23:21:30.165261 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon-log"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.165268 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon-log"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.165478 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.165508 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2ab497-2486-4439-ae5b-c2284f870680" containerName="horizon-log"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.167181 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.185076 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncmfr"]
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.287937 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9v52\" (UniqueName: \"kubernetes.io/projected/db7b49f6-c61a-4db0-a0cd-1f91923bb781-kube-api-access-k9v52\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.288250 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-utilities\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.288275 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-catalog-content\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.389935 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-utilities\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.390012 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-catalog-content\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.390190 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9v52\" (UniqueName: \"kubernetes.io/projected/db7b49f6-c61a-4db0-a0cd-1f91923bb781-kube-api-access-k9v52\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.391263 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-catalog-content\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.391519 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-utilities\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.420785 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9v52\" (UniqueName: \"kubernetes.io/projected/db7b49f6-c61a-4db0-a0cd-1f91923bb781-kube-api-access-k9v52\") pod \"redhat-marketplace-ncmfr\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") " pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:30 crc kubenswrapper[4979]: I0130 23:21:30.499767 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:31 crc kubenswrapper[4979]: I0130 23:21:31.038959 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncmfr"]
Jan 30 23:21:31 crc kubenswrapper[4979]: I0130 23:21:31.088491 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc87a0f7-9b2b-46ce-a000-c1c5195535d8" path="/var/lib/kubelet/pods/fc87a0f7-9b2b-46ce-a000-c1c5195535d8/volumes"
Jan 30 23:21:31 crc kubenswrapper[4979]: I0130 23:21:31.950818 4979 generic.go:334] "Generic (PLEG): container finished" podID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerID="b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d" exitCode=0
Jan 30 23:21:31 crc kubenswrapper[4979]: I0130 23:21:31.951416 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerDied","Data":"b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d"}
Jan 30 23:21:31 crc kubenswrapper[4979]: I0130 23:21:31.951513 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerStarted","Data":"3a83afe9c90f101f2c59a963efc80878bb5b9d1c7e7ba87227c5ecf0489009f1"}
Jan 30 23:21:31 crc kubenswrapper[4979]: I0130 23:21:31.954519 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 23:21:32 crc kubenswrapper[4979]: E0130 23:21:32.807356 4979 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97022094_b924_4af4_9725_f91da4c8c957.slice/crio-conmon-1843ff126303523abb6ffa0db5c21019540d237b2e87b5173765021c3bff4819.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 23:21:32 crc kubenswrapper[4979]: I0130 23:21:32.966382 4979 generic.go:334] "Generic (PLEG): container finished" podID="97022094-b924-4af4-9725-f91da4c8c957" containerID="1843ff126303523abb6ffa0db5c21019540d237b2e87b5173765021c3bff4819" exitCode=137
Jan 30 23:21:32 crc kubenswrapper[4979]: I0130 23:21:32.967002 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6577687c-vbmgt" event={"ID":"97022094-b924-4af4-9725-f91da4c8c957","Type":"ContainerDied","Data":"1843ff126303523abb6ffa0db5c21019540d237b2e87b5173765021c3bff4819"}
Jan 30 23:21:32 crc kubenswrapper[4979]: I0130 23:21:32.969423 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerStarted","Data":"5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997"}
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.540758 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c6577687c-vbmgt"
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.680386 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-config-data\") pod \"97022094-b924-4af4-9725-f91da4c8c957\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") "
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.681192 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97022094-b924-4af4-9725-f91da4c8c957-horizon-secret-key\") pod \"97022094-b924-4af4-9725-f91da4c8c957\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") "
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.681715 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97022094-b924-4af4-9725-f91da4c8c957-logs\") pod \"97022094-b924-4af4-9725-f91da4c8c957\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") "
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.682238 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdt2n\" (UniqueName: \"kubernetes.io/projected/97022094-b924-4af4-9725-f91da4c8c957-kube-api-access-zdt2n\") pod \"97022094-b924-4af4-9725-f91da4c8c957\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") "
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.682343 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-scripts\") pod \"97022094-b924-4af4-9725-f91da4c8c957\" (UID: \"97022094-b924-4af4-9725-f91da4c8c957\") "
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.682434 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97022094-b924-4af4-9725-f91da4c8c957-logs" (OuterVolumeSpecName: "logs") pod "97022094-b924-4af4-9725-f91da4c8c957" (UID: "97022094-b924-4af4-9725-f91da4c8c957"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.683695 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97022094-b924-4af4-9725-f91da4c8c957-logs\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.688203 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97022094-b924-4af4-9725-f91da4c8c957-kube-api-access-zdt2n" (OuterVolumeSpecName: "kube-api-access-zdt2n") pod "97022094-b924-4af4-9725-f91da4c8c957" (UID: "97022094-b924-4af4-9725-f91da4c8c957"). InnerVolumeSpecName "kube-api-access-zdt2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.688653 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97022094-b924-4af4-9725-f91da4c8c957-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "97022094-b924-4af4-9725-f91da4c8c957" (UID: "97022094-b924-4af4-9725-f91da4c8c957"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.711897 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-config-data" (OuterVolumeSpecName: "config-data") pod "97022094-b924-4af4-9725-f91da4c8c957" (UID: "97022094-b924-4af4-9725-f91da4c8c957"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.741215 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-scripts" (OuterVolumeSpecName: "scripts") pod "97022094-b924-4af4-9725-f91da4c8c957" (UID: "97022094-b924-4af4-9725-f91da4c8c957"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.785534 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.785588 4979 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97022094-b924-4af4-9725-f91da4c8c957-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.785614 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdt2n\" (UniqueName: \"kubernetes.io/projected/97022094-b924-4af4-9725-f91da4c8c957-kube-api-access-zdt2n\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.785634 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97022094-b924-4af4-9725-f91da4c8c957-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.985508 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6577687c-vbmgt" event={"ID":"97022094-b924-4af4-9725-f91da4c8c957","Type":"ContainerDied","Data":"607207eba03fa8c4a7bd9dafe27a8d6eca3bcb615fa115aa828cf7f4ce14ddae"}
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.985567 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c6577687c-vbmgt"
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.985580 4979 scope.go:117] "RemoveContainer" containerID="aabf2fa7c84990e41e200a9091e9e1684120f5c07d0330702a47b20463fc7a2e"
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.988548 4979 generic.go:334] "Generic (PLEG): container finished" podID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerID="5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997" exitCode=0
Jan 30 23:21:33 crc kubenswrapper[4979]: I0130 23:21:33.988592 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerDied","Data":"5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997"}
Jan 30 23:21:34 crc kubenswrapper[4979]: I0130 23:21:34.052241 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c6577687c-vbmgt"]
Jan 30 23:21:34 crc kubenswrapper[4979]: I0130 23:21:34.060629 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-c6577687c-vbmgt"]
Jan 30 23:21:34 crc kubenswrapper[4979]: I0130 23:21:34.212297 4979 scope.go:117] "RemoveContainer" containerID="1843ff126303523abb6ffa0db5c21019540d237b2e87b5173765021c3bff4819"
Jan 30 23:21:35 crc kubenswrapper[4979]: I0130 23:21:35.006523 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerStarted","Data":"09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3"}
Jan 30 23:21:35 crc kubenswrapper[4979]: I0130 23:21:35.052747 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ncmfr" podStartSLOduration=2.524742951 podStartE2EDuration="5.052719873s" podCreationTimestamp="2026-01-30 23:21:30 +0000 UTC" firstStartedPulling="2026-01-30 23:21:31.954262239 +0000 UTC m=+6087.915509272" lastFinishedPulling="2026-01-30 23:21:34.482239131 +0000 UTC m=+6090.443486194" observedRunningTime="2026-01-30 23:21:35.034754636 +0000 UTC m=+6090.996001709" watchObservedRunningTime="2026-01-30 23:21:35.052719873 +0000 UTC m=+6091.013966906"
Jan 30 23:21:35 crc kubenswrapper[4979]: I0130 23:21:35.086527 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97022094-b924-4af4-9725-f91da4c8c957" path="/var/lib/kubelet/pods/97022094-b924-4af4-9725-f91da4c8c957/volumes"
Jan 30 23:21:40 crc kubenswrapper[4979]: I0130 23:21:40.499886 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:40 crc kubenswrapper[4979]: I0130 23:21:40.500428 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:40 crc kubenswrapper[4979]: I0130 23:21:40.546073 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:41 crc kubenswrapper[4979]: I0130 23:21:41.124874 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:41 crc kubenswrapper[4979]: I0130 23:21:41.186312 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncmfr"]
Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.084369 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ncmfr" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="registry-server" containerID="cri-o://09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3" gracePeriod=2
Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.545940 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.548716 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-catalog-content\") pod \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") "
Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.548805 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9v52\" (UniqueName: \"kubernetes.io/projected/db7b49f6-c61a-4db0-a0cd-1f91923bb781-kube-api-access-k9v52\") pod \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") "
Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.548968 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-utilities\") pod \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\" (UID: \"db7b49f6-c61a-4db0-a0cd-1f91923bb781\") "
Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.550541 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-utilities" (OuterVolumeSpecName: "utilities") pod "db7b49f6-c61a-4db0-a0cd-1f91923bb781" (UID: "db7b49f6-c61a-4db0-a0cd-1f91923bb781"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.556304 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7b49f6-c61a-4db0-a0cd-1f91923bb781-kube-api-access-k9v52" (OuterVolumeSpecName: "kube-api-access-k9v52") pod "db7b49f6-c61a-4db0-a0cd-1f91923bb781" (UID: "db7b49f6-c61a-4db0-a0cd-1f91923bb781"). InnerVolumeSpecName "kube-api-access-k9v52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.597018 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db7b49f6-c61a-4db0-a0cd-1f91923bb781" (UID: "db7b49f6-c61a-4db0-a0cd-1f91923bb781"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.650772 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.650803 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9v52\" (UniqueName: \"kubernetes.io/projected/db7b49f6-c61a-4db0-a0cd-1f91923bb781-kube-api-access-k9v52\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:43 crc kubenswrapper[4979]: I0130 23:21:43.650816 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7b49f6-c61a-4db0-a0cd-1f91923bb781-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.096115 4979 generic.go:334] "Generic (PLEG): container finished" podID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerID="09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3" exitCode=0
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.096178 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerDied","Data":"09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3"}
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.096207 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncmfr" event={"ID":"db7b49f6-c61a-4db0-a0cd-1f91923bb781","Type":"ContainerDied","Data":"3a83afe9c90f101f2c59a963efc80878bb5b9d1c7e7ba87227c5ecf0489009f1"}
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.096230 4979 scope.go:117] "RemoveContainer" containerID="09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.096253 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncmfr"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.125417 4979 scope.go:117] "RemoveContainer" containerID="5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.154566 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncmfr"]
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.167988 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncmfr"]
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.179730 4979 scope.go:117] "RemoveContainer" containerID="b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.211245 4979 scope.go:117] "RemoveContainer" containerID="09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3"
Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.211758 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3\": container with ID starting with 09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3 not found: ID does not exist" containerID="09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.211879 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3"} err="failed to get container status \"09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3\": rpc error: code = NotFound desc = could not find container \"09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3\": container with ID starting with 09fa0e0d1bee1f1ec7ec1bf6a5cfd1cecd13691eb9071b83ac60964097b242e3 not found: ID does not exist"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.211962 4979 scope.go:117] "RemoveContainer" containerID="5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997"
Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.212492 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997\": container with ID starting with 5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997 not found: ID does not exist" containerID="5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.212528 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997"} err="failed to get container status \"5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997\": rpc error: code = NotFound desc = could not find container \"5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997\": container with ID starting with 5f7c6a0786e2bcdf8f211c9701660dc1775d1e18a0b20cbf82e62517d8c42997 not found: ID does not exist"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.212547 4979 scope.go:117] "RemoveContainer" containerID="b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d"
Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.212851 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d\": container with ID starting with b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d not found: ID does not exist" containerID="b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.212905 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d"} err="failed to get container status \"b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d\": rpc error: code = NotFound desc = could not find container \"b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d\": container with ID starting with b41ad83a94b2d8c40dad33d1e48a8d5ab6522bff6d1d4d73228cccb1f815113d not found: ID does not exist"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.871273 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9b688f5c-2xlsg"]
Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.871826 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="extract-content"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.871857 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="extract-content"
Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.872092 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="extract-utilities"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872103 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="extract-utilities"
Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.872123 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon-log"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872132 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon-log"
Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.872145 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872153 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon"
Jan 30 23:21:44 crc kubenswrapper[4979]: E0130 23:21:44.872197 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="registry-server"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872206 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="registry-server"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872461 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon-log"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872484 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="97022094-b924-4af4-9725-f91da4c8c957" containerName="horizon"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.872499 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" containerName="registry-server"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.873891 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.887575 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9b688f5c-2xlsg"]
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.977894 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d199303b-d615-40f9-a420-bfde359d8392-horizon-secret-key\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.977941 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d199303b-d615-40f9-a420-bfde359d8392-logs\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.977988 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d199303b-d615-40f9-a420-bfde359d8392-scripts\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.978020 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22wbq\" (UniqueName: \"kubernetes.io/projected/d199303b-d615-40f9-a420-bfde359d8392-kube-api-access-22wbq\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:44 crc kubenswrapper[4979]: I0130 23:21:44.978159 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d199303b-d615-40f9-a420-bfde359d8392-config-data\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.079502 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7b49f6-c61a-4db0-a0cd-1f91923bb781" path="/var/lib/kubelet/pods/db7b49f6-c61a-4db0-a0cd-1f91923bb781/volumes"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.079915 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d199303b-d615-40f9-a420-bfde359d8392-config-data\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.080145 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d199303b-d615-40f9-a420-bfde359d8392-horizon-secret-key\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.080232 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d199303b-d615-40f9-a420-bfde359d8392-logs\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.080313 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d199303b-d615-40f9-a420-bfde359d8392-scripts\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.080391 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22wbq\" (UniqueName: \"kubernetes.io/projected/d199303b-d615-40f9-a420-bfde359d8392-kube-api-access-22wbq\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.081149 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d199303b-d615-40f9-a420-bfde359d8392-logs\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.081168 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d199303b-d615-40f9-a420-bfde359d8392-config-data\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.081560 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d199303b-d615-40f9-a420-bfde359d8392-scripts\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.090682 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d199303b-d615-40f9-a420-bfde359d8392-horizon-secret-key\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.100212 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22wbq\" (UniqueName: \"kubernetes.io/projected/d199303b-d615-40f9-a420-bfde359d8392-kube-api-access-22wbq\") pod \"horizon-9b688f5c-2xlsg\" (UID: \"d199303b-d615-40f9-a420-bfde359d8392\") " pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.192346 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:45 crc kubenswrapper[4979]: I0130 23:21:45.643539 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9b688f5c-2xlsg"]
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.115597 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9b688f5c-2xlsg" event={"ID":"d199303b-d615-40f9-a420-bfde359d8392","Type":"ContainerStarted","Data":"1f6e61f45ee2c20b9d250631abd7d5b77981ced17dc48e9e83853de337cfd740"}
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.115963 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9b688f5c-2xlsg" event={"ID":"d199303b-d615-40f9-a420-bfde359d8392","Type":"ContainerStarted","Data":"f7b9808af55c261ce3a86f8e914ffe1eda713e5129a0c40feee8fa197969f4e6"}
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.115979 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9b688f5c-2xlsg" event={"ID":"d199303b-d615-40f9-a420-bfde359d8392","Type":"ContainerStarted","Data":"e76dbefd9375959ed08c6ef182fa3b9ba46bb6e9ca794b8b13ca8ec8394d76a8"}
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.149483 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-9b688f5c-2xlsg" podStartSLOduration=2.149459837 podStartE2EDuration="2.149459837s" podCreationTimestamp="2026-01-30 23:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:21:46.147476223 +0000 UTC m=+6102.108723256" watchObservedRunningTime="2026-01-30 23:21:46.149459837 +0000 UTC m=+6102.110706870"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.243422 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-vjhff"]
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.245205 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-vjhff"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.257138 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-vjhff"]
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.407595 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsjkh\" (UniqueName: \"kubernetes.io/projected/e764deeb-609a-4c01-8e75-729988b54849-kube-api-access-wsjkh\") pod \"heat-db-create-vjhff\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " pod="openstack/heat-db-create-vjhff"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.407850 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e764deeb-609a-4c01-8e75-729988b54849-operator-scripts\") pod \"heat-db-create-vjhff\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " pod="openstack/heat-db-create-vjhff"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.447433 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-90f9-account-create-update-f758c"]
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.449087 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-90f9-account-create-update-f758c"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.455857 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-90f9-account-create-update-f758c"]
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.482785 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.509935 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e764deeb-609a-4c01-8e75-729988b54849-operator-scripts\") pod \"heat-db-create-vjhff\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " pod="openstack/heat-db-create-vjhff"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.510075 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsjkh\" (UniqueName: \"kubernetes.io/projected/e764deeb-609a-4c01-8e75-729988b54849-kube-api-access-wsjkh\") pod \"heat-db-create-vjhff\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " pod="openstack/heat-db-create-vjhff"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.510917 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e764deeb-609a-4c01-8e75-729988b54849-operator-scripts\") pod \"heat-db-create-vjhff\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " pod="openstack/heat-db-create-vjhff"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.529691 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsjkh\" (UniqueName: \"kubernetes.io/projected/e764deeb-609a-4c01-8e75-729988b54849-kube-api-access-wsjkh\") pod \"heat-db-create-vjhff\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") " pod="openstack/heat-db-create-vjhff"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.580481 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-vjhff"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.611811 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjw27\" (UniqueName: \"kubernetes.io/projected/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-kube-api-access-bjw27\") pod \"heat-90f9-account-create-update-f758c\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " pod="openstack/heat-90f9-account-create-update-f758c"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.612482 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-operator-scripts\") pod \"heat-90f9-account-create-update-f758c\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " pod="openstack/heat-90f9-account-create-update-f758c"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.714001 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjw27\" (UniqueName: \"kubernetes.io/projected/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-kube-api-access-bjw27\") pod \"heat-90f9-account-create-update-f758c\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " pod="openstack/heat-90f9-account-create-update-f758c"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.714624 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-operator-scripts\") pod \"heat-90f9-account-create-update-f758c\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " pod="openstack/heat-90f9-account-create-update-f758c"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.715795 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-operator-scripts\") pod \"heat-90f9-account-create-update-f758c\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " pod="openstack/heat-90f9-account-create-update-f758c"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.734652 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjw27\" (UniqueName: \"kubernetes.io/projected/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-kube-api-access-bjw27\") pod \"heat-90f9-account-create-update-f758c\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") " pod="openstack/heat-90f9-account-create-update-f758c"
Jan 30 23:21:46 crc kubenswrapper[4979]: I0130 23:21:46.794095 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-90f9-account-create-update-f758c"
Jan 30 23:21:47 crc kubenswrapper[4979]: I0130 23:21:47.064576 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-vjhff"]
Jan 30 23:21:47 crc kubenswrapper[4979]: I0130 23:21:47.189055 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-vjhff" event={"ID":"e764deeb-609a-4c01-8e75-729988b54849","Type":"ContainerStarted","Data":"60d1ddd9bb063feb469224e320c87fd0ea28242283125ecdcd769faaab57aa0b"}
Jan 30 23:21:47 crc kubenswrapper[4979]: W0130 23:21:47.276254 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b4b69e9_3082_4eac_a4c9_2fd308ed75bd.slice/crio-96fceb25d6b92dd55f91ea8aa6c30654eb2054c642d87e18e128da7918b15abb WatchSource:0}: Error finding container 96fceb25d6b92dd55f91ea8aa6c30654eb2054c642d87e18e128da7918b15abb: Status 404 returned error can't find the container with id 96fceb25d6b92dd55f91ea8aa6c30654eb2054c642d87e18e128da7918b15abb
Jan 30 23:21:47 crc kubenswrapper[4979]: I0130 23:21:47.280819 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-90f9-account-create-update-f758c"]
Jan 30 23:21:48 crc kubenswrapper[4979]: I0130 23:21:48.213390 4979 generic.go:334] "Generic (PLEG): container finished" podID="e764deeb-609a-4c01-8e75-729988b54849" containerID="4bf379d2ade37e9d1e0a22eab217d802e3a8854982275953c74bf158307b26eb" exitCode=0
Jan 30 23:21:48 crc kubenswrapper[4979]: I0130 23:21:48.213457 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-vjhff" event={"ID":"e764deeb-609a-4c01-8e75-729988b54849","Type":"ContainerDied","Data":"4bf379d2ade37e9d1e0a22eab217d802e3a8854982275953c74bf158307b26eb"}
Jan 30 23:21:48 crc kubenswrapper[4979]: I0130 23:21:48.222525 4979 generic.go:334] "Generic (PLEG): container finished" podID="3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" containerID="58163cfecf1e6d2fed441241595c3b510d0c4b0a9adfab7fece442a3238e97f0" exitCode=0
Jan 30 23:21:48 crc kubenswrapper[4979]: I0130 23:21:48.222566 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-90f9-account-create-update-f758c" event={"ID":"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd","Type":"ContainerDied","Data":"58163cfecf1e6d2fed441241595c3b510d0c4b0a9adfab7fece442a3238e97f0"}
Jan 30 23:21:48 crc kubenswrapper[4979]: I0130 23:21:48.222596 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-90f9-account-create-update-f758c" event={"ID":"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd","Type":"ContainerStarted","Data":"96fceb25d6b92dd55f91ea8aa6c30654eb2054c642d87e18e128da7918b15abb"}
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.747041 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-90f9-account-create-update-f758c"
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.752315 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-vjhff"
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.890147 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjw27\" (UniqueName: \"kubernetes.io/projected/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-kube-api-access-bjw27\") pod \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") "
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.890220 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-operator-scripts\") pod \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\" (UID: \"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd\") "
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.890292 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e764deeb-609a-4c01-8e75-729988b54849-operator-scripts\") pod \"e764deeb-609a-4c01-8e75-729988b54849\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") "
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.890344 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsjkh\" (UniqueName: \"kubernetes.io/projected/e764deeb-609a-4c01-8e75-729988b54849-kube-api-access-wsjkh\") pod \"e764deeb-609a-4c01-8e75-729988b54849\" (UID: \"e764deeb-609a-4c01-8e75-729988b54849\") "
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.890862 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" (UID: "3b4b69e9-3082-4eac-a4c9-2fd308ed75bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.890939 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e764deeb-609a-4c01-8e75-729988b54849-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e764deeb-609a-4c01-8e75-729988b54849" (UID: "e764deeb-609a-4c01-8e75-729988b54849"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.895656 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-kube-api-access-bjw27" (OuterVolumeSpecName: "kube-api-access-bjw27") pod "3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" (UID: "3b4b69e9-3082-4eac-a4c9-2fd308ed75bd"). InnerVolumeSpecName "kube-api-access-bjw27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.896472 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e764deeb-609a-4c01-8e75-729988b54849-kube-api-access-wsjkh" (OuterVolumeSpecName: "kube-api-access-wsjkh") pod "e764deeb-609a-4c01-8e75-729988b54849" (UID: "e764deeb-609a-4c01-8e75-729988b54849"). InnerVolumeSpecName "kube-api-access-wsjkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.993238 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e764deeb-609a-4c01-8e75-729988b54849-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.993292 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsjkh\" (UniqueName: \"kubernetes.io/projected/e764deeb-609a-4c01-8e75-729988b54849-kube-api-access-wsjkh\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.993319 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjw27\" (UniqueName: \"kubernetes.io/projected/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-kube-api-access-bjw27\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:49 crc kubenswrapper[4979]: I0130 23:21:49.993336 4979 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 23:21:50 crc kubenswrapper[4979]: I0130 23:21:50.240870 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-vjhff" event={"ID":"e764deeb-609a-4c01-8e75-729988b54849","Type":"ContainerDied","Data":"60d1ddd9bb063feb469224e320c87fd0ea28242283125ecdcd769faaab57aa0b"}
Jan 30 23:21:50 crc kubenswrapper[4979]: I0130 23:21:50.240896 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-vjhff"
Jan 30 23:21:50 crc kubenswrapper[4979]: I0130 23:21:50.240912 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60d1ddd9bb063feb469224e320c87fd0ea28242283125ecdcd769faaab57aa0b"
Jan 30 23:21:50 crc kubenswrapper[4979]: I0130 23:21:50.242327 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-90f9-account-create-update-f758c" event={"ID":"3b4b69e9-3082-4eac-a4c9-2fd308ed75bd","Type":"ContainerDied","Data":"96fceb25d6b92dd55f91ea8aa6c30654eb2054c642d87e18e128da7918b15abb"}
Jan 30 23:21:50 crc kubenswrapper[4979]: I0130 23:21:50.242423 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96fceb25d6b92dd55f91ea8aa6c30654eb2054c642d87e18e128da7918b15abb"
Jan 30 23:21:50 crc kubenswrapper[4979]: I0130 23:21:50.242397 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-90f9-account-create-update-f758c"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.516308 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-lhhst"]
Jan 30 23:21:51 crc kubenswrapper[4979]: E0130 23:21:51.517223 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" containerName="mariadb-account-create-update"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.517246 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" containerName="mariadb-account-create-update"
Jan 30 23:21:51 crc kubenswrapper[4979]: E0130 23:21:51.517291 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e764deeb-609a-4c01-8e75-729988b54849" containerName="mariadb-database-create"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.517304 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e764deeb-609a-4c01-8e75-729988b54849" containerName="mariadb-database-create"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.517666 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" containerName="mariadb-account-create-update"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.517702 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e764deeb-609a-4c01-8e75-729988b54849" containerName="mariadb-database-create"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.518717 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-lhhst"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.522162 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4r7vf"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.522840 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.532673 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-lhhst"]
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.627249 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbk5r\" (UniqueName: \"kubernetes.io/projected/4e6a3c61-50ef-48b5-bcc0-ab3374693979-kube-api-access-nbk5r\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.627296 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-combined-ca-bundle\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.627323 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-config-data\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.730305 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-combined-ca-bundle\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.730781 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbk5r\" (UniqueName: \"kubernetes.io/projected/4e6a3c61-50ef-48b5-bcc0-ab3374693979-kube-api-access-nbk5r\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.730918 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-config-data\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.737009 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-config-data\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.744091 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-combined-ca-bundle\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.747160 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbk5r\" (UniqueName: \"kubernetes.io/projected/4e6a3c61-50ef-48b5-bcc0-ab3374693979-kube-api-access-nbk5r\") pod \"heat-db-sync-lhhst\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") " pod="openstack/heat-db-sync-lhhst"
Jan 30 23:21:51 crc kubenswrapper[4979]: I0130 23:21:51.915831 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-lhhst"
Jan 30 23:21:52 crc kubenswrapper[4979]: I0130 23:21:52.388161 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-lhhst"]
Jan 30 23:21:52 crc kubenswrapper[4979]: W0130 23:21:52.391375 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e6a3c61_50ef_48b5_bcc0_ab3374693979.slice/crio-856b218d5e77c1cc97112b233f6bb15e4484a7c4d91e34a37878ff0f27d4422a WatchSource:0}: Error finding container 856b218d5e77c1cc97112b233f6bb15e4484a7c4d91e34a37878ff0f27d4422a: Status 404 returned error can't find the container with id 856b218d5e77c1cc97112b233f6bb15e4484a7c4d91e34a37878ff0f27d4422a
Jan 30 23:21:53 crc kubenswrapper[4979]: I0130 23:21:53.280362 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-lhhst" event={"ID":"4e6a3c61-50ef-48b5-bcc0-ab3374693979","Type":"ContainerStarted","Data":"856b218d5e77c1cc97112b233f6bb15e4484a7c4d91e34a37878ff0f27d4422a"}
Jan 30 23:21:55 crc kubenswrapper[4979]: I0130 23:21:55.192743 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:55 crc kubenswrapper[4979]: I0130 23:21:55.193215 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9b688f5c-2xlsg"
Jan 30 23:21:55 crc kubenswrapper[4979]: I0130 23:21:55.195112 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-9b688f5c-2xlsg" podUID="d199303b-d615-40f9-a420-bfde359d8392" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.120:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.120:8080: connect: connection refused"
Jan 30 23:21:59 crc kubenswrapper[4979]: I0130 23:21:59.370861 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-lhhst" event={"ID":"4e6a3c61-50ef-48b5-bcc0-ab3374693979","Type":"ContainerStarted","Data":"c67c788d4520c8623a63e6f6ba906d43acdb20876d211c331df4d5a9e42eee7e"}
Jan 30 23:21:59 crc kubenswrapper[4979]: I0130 23:21:59.414167 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-lhhst" podStartSLOduration=1.970073687 podStartE2EDuration="8.414139736s" podCreationTimestamp="2026-01-30 23:21:51 +0000 UTC" firstStartedPulling="2026-01-30 23:21:52.3946203 +0000 UTC m=+6108.355867333" lastFinishedPulling="2026-01-30 23:21:58.838686339 +0000 UTC m=+6114.799933382" observedRunningTime="2026-01-30 23:21:59.398005709 +0000 UTC m=+6115.359252782" watchObservedRunningTime="2026-01-30 23:21:59.414139736 +0000 UTC m=+6115.375386799"
Jan 30 23:22:01 crc kubenswrapper[4979]: I0130 23:22:01.392369 4979 generic.go:334] "Generic (PLEG): container finished" podID="4e6a3c61-50ef-48b5-bcc0-ab3374693979" containerID="c67c788d4520c8623a63e6f6ba906d43acdb20876d211c331df4d5a9e42eee7e" exitCode=0
Jan 30 23:22:01 crc kubenswrapper[4979]: I0130 23:22:01.392460 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-lhhst" event={"ID":"4e6a3c61-50ef-48b5-bcc0-ab3374693979","Type":"ContainerDied","Data":"c67c788d4520c8623a63e6f6ba906d43acdb20876d211c331df4d5a9e42eee7e"}
Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.846126 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-lhhst"
Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.884988 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-combined-ca-bundle\") pod \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") "
Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.885644 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbk5r\" (UniqueName: \"kubernetes.io/projected/4e6a3c61-50ef-48b5-bcc0-ab3374693979-kube-api-access-nbk5r\") pod \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") "
Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.885920 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-config-data\") pod \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\" (UID: \"4e6a3c61-50ef-48b5-bcc0-ab3374693979\") "
Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.896060 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6a3c61-50ef-48b5-bcc0-ab3374693979-kube-api-access-nbk5r" (OuterVolumeSpecName: "kube-api-access-nbk5r") pod "4e6a3c61-50ef-48b5-bcc0-ab3374693979" (UID: "4e6a3c61-50ef-48b5-bcc0-ab3374693979"). InnerVolumeSpecName "kube-api-access-nbk5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.917428 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e6a3c61-50ef-48b5-bcc0-ab3374693979" (UID: "4e6a3c61-50ef-48b5-bcc0-ab3374693979"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.964748 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-config-data" (OuterVolumeSpecName: "config-data") pod "4e6a3c61-50ef-48b5-bcc0-ab3374693979" (UID: "4e6a3c61-50ef-48b5-bcc0-ab3374693979"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.988858 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.988884 4979 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6a3c61-50ef-48b5-bcc0-ab3374693979-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:02 crc kubenswrapper[4979]: I0130 23:22:02.988895 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbk5r\" (UniqueName: \"kubernetes.io/projected/4e6a3c61-50ef-48b5-bcc0-ab3374693979-kube-api-access-nbk5r\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:03 crc kubenswrapper[4979]: I0130 23:22:03.416294 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-lhhst" event={"ID":"4e6a3c61-50ef-48b5-bcc0-ab3374693979","Type":"ContainerDied","Data":"856b218d5e77c1cc97112b233f6bb15e4484a7c4d91e34a37878ff0f27d4422a"} Jan 30 23:22:03 crc kubenswrapper[4979]: I0130 23:22:03.416687 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="856b218d5e77c1cc97112b233f6bb15e4484a7c4d91e34a37878ff0f27d4422a" Jan 30 23:22:03 crc kubenswrapper[4979]: I0130 23:22:03.416747 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-lhhst" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.541084 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5998d4684d-smdfx"] Jan 30 23:22:04 crc kubenswrapper[4979]: E0130 23:22:04.541956 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6a3c61-50ef-48b5-bcc0-ab3374693979" containerName="heat-db-sync" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.541971 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6a3c61-50ef-48b5-bcc0-ab3374693979" containerName="heat-db-sync" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.542185 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6a3c61-50ef-48b5-bcc0-ab3374693979" containerName="heat-db-sync" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.542985 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.549251 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.549763 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4r7vf" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.556337 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.578115 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5998d4684d-smdfx"] Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.633911 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7bf56f7748-njbm7"] Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.635501 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.637814 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.651503 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hqqm\" (UniqueName: \"kubernetes.io/projected/b2612383-27a6-4663-b45a-0aac825bf021-kube-api-access-8hqqm\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.651605 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-config-data-custom\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.651630 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-config-data\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.651723 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-combined-ca-bundle\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.656064 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7bf56f7748-njbm7"] Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.693921 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-d7d58dff5-tjkx9"] Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.695218 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.701538 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.724197 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-d7d58dff5-tjkx9"] Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753517 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-config-data-custom\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753576 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpq5q\" (UniqueName: \"kubernetes.io/projected/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-kube-api-access-gpq5q\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753639 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-combined-ca-bundle\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753684 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hqqm\" (UniqueName: \"kubernetes.io/projected/b2612383-27a6-4663-b45a-0aac825bf021-kube-api-access-8hqqm\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753710 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-combined-ca-bundle\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753739 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-combined-ca-bundle\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753765 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-config-data\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753784 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-config-data\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: 
\"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753840 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfv82\" (UniqueName: \"kubernetes.io/projected/d72b8dbc-f35e-4aea-ab91-75be38745fd1-kube-api-access-dfv82\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753862 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-config-data-custom\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753887 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-config-data\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.753921 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-config-data-custom\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.762891 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-combined-ca-bundle\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.764456 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-config-data\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.766013 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2612383-27a6-4663-b45a-0aac825bf021-config-data-custom\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.776780 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hqqm\" (UniqueName: \"kubernetes.io/projected/b2612383-27a6-4663-b45a-0aac825bf021-kube-api-access-8hqqm\") pod \"heat-engine-5998d4684d-smdfx\" (UID: \"b2612383-27a6-4663-b45a-0aac825bf021\") " pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.855654 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfv82\" (UniqueName: \"kubernetes.io/projected/d72b8dbc-f35e-4aea-ab91-75be38745fd1-kube-api-access-dfv82\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: 
\"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.855750 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-config-data-custom\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.855798 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-config-data-custom\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.855831 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpq5q\" (UniqueName: \"kubernetes.io/projected/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-kube-api-access-gpq5q\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.855938 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-combined-ca-bundle\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.855967 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-combined-ca-bundle\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.857069 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-config-data\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.857103 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-config-data\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.861506 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-config-data-custom\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.862248 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-config-data-custom\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " 
pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.863416 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-config-data\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.870012 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72b8dbc-f35e-4aea-ab91-75be38745fd1-combined-ca-bundle\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.883178 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.885819 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-combined-ca-bundle\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.886693 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfv82\" (UniqueName: \"kubernetes.io/projected/d72b8dbc-f35e-4aea-ab91-75be38745fd1-kube-api-access-dfv82\") pod \"heat-cfnapi-7bf56f7748-njbm7\" (UID: \"d72b8dbc-f35e-4aea-ab91-75be38745fd1\") " pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.889080 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpq5q\" (UniqueName: \"kubernetes.io/projected/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-kube-api-access-gpq5q\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.890749 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608b4783-d5c9-467f-9a08-9cd6bc0f0fa9-config-data\") pod \"heat-api-d7d58dff5-tjkx9\" (UID: \"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9\") " pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:04 crc kubenswrapper[4979]: I0130 23:22:04.952727 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:05 crc kubenswrapper[4979]: I0130 23:22:05.020945 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:05 crc kubenswrapper[4979]: I0130 23:22:05.372625 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5998d4684d-smdfx"] Jan 30 23:22:05 crc kubenswrapper[4979]: I0130 23:22:05.433282 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5998d4684d-smdfx" event={"ID":"b2612383-27a6-4663-b45a-0aac825bf021","Type":"ContainerStarted","Data":"fe737396b69b4aed83691d8dc360b772469f6a98ced0405fc07ff76df54edf3b"} Jan 30 23:22:05 crc kubenswrapper[4979]: I0130 23:22:05.501895 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7bf56f7748-njbm7"] Jan 30 23:22:05 crc kubenswrapper[4979]: I0130 23:22:05.603545 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-d7d58dff5-tjkx9"] Jan 30 23:22:06 crc kubenswrapper[4979]: I0130 23:22:06.446220 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5998d4684d-smdfx" event={"ID":"b2612383-27a6-4663-b45a-0aac825bf021","Type":"ContainerStarted","Data":"37a22131ae6f3bb74679c57ec9465a1a9b90bc31493ee4bd6dd8bdfd06af1633"} Jan 30 23:22:06 crc kubenswrapper[4979]: I0130 23:22:06.446756 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:06 crc kubenswrapper[4979]: I0130 23:22:06.451017 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d7d58dff5-tjkx9" event={"ID":"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9","Type":"ContainerStarted","Data":"29f08cf5d30ce96408e1e27a14ec3e1f056ea248f2705b3c4ee30fc38ccdc476"} Jan 30 23:22:06 crc kubenswrapper[4979]: I0130 23:22:06.452664 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" event={"ID":"d72b8dbc-f35e-4aea-ab91-75be38745fd1","Type":"ContainerStarted","Data":"4fa148c89cef593af9ed94070999f72545c6fc344e3f54badefcc61b96b1c9e4"} Jan 30 23:22:06 crc kubenswrapper[4979]: I0130 23:22:06.473548 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5998d4684d-smdfx" podStartSLOduration=2.473530759 podStartE2EDuration="2.473530759s" podCreationTimestamp="2026-01-30 23:22:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:22:06.467684731 +0000 UTC m=+6122.428931764" watchObservedRunningTime="2026-01-30 23:22:06.473530759 +0000 UTC m=+6122.434777792" Jan 30 23:22:06 crc kubenswrapper[4979]: I0130 23:22:06.903485 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.477768 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" event={"ID":"d72b8dbc-f35e-4aea-ab91-75be38745fd1","Type":"ContainerStarted","Data":"9514a5344c062ab05c3963f7f5cd2b99667597051b62ff1e6ab7871bac553473"} Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.478312 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.480702 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-d7d58dff5-tjkx9" event={"ID":"608b4783-d5c9-467f-9a08-9cd6bc0f0fa9","Type":"ContainerStarted","Data":"571a61434f246168d8a2aa5e4f9ede97bfdb87a29dfa451f8a0e1145fbcfecaf"} Jan 30 23:22:08 
crc kubenswrapper[4979]: I0130 23:22:08.480876 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.493868 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" podStartSLOduration=2.455823351 podStartE2EDuration="4.493852509s" podCreationTimestamp="2026-01-30 23:22:04 +0000 UTC" firstStartedPulling="2026-01-30 23:22:05.511964661 +0000 UTC m=+6121.473211694" lastFinishedPulling="2026-01-30 23:22:07.549993809 +0000 UTC m=+6123.511240852" observedRunningTime="2026-01-30 23:22:08.492185244 +0000 UTC m=+6124.453432277" watchObservedRunningTime="2026-01-30 23:22:08.493852509 +0000 UTC m=+6124.455099542" Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.517046 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-d7d58dff5-tjkx9" podStartSLOduration=2.57397388 podStartE2EDuration="4.517002766s" podCreationTimestamp="2026-01-30 23:22:04 +0000 UTC" firstStartedPulling="2026-01-30 23:22:05.612697878 +0000 UTC m=+6121.573944911" lastFinishedPulling="2026-01-30 23:22:07.555726764 +0000 UTC m=+6123.516973797" observedRunningTime="2026-01-30 23:22:08.515165546 +0000 UTC m=+6124.476412599" watchObservedRunningTime="2026-01-30 23:22:08.517002766 +0000 UTC m=+6124.478249819" Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.800224 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-9b688f5c-2xlsg" Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.888268 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66977458c7-msp58"] Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.888505 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon-log" containerID="cri-o://39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83" gracePeriod=30 Jan 30 23:22:08 crc kubenswrapper[4979]: I0130 23:22:08.888576 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" containerID="cri-o://f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd" gracePeriod=30 Jan 30 23:22:10 crc kubenswrapper[4979]: I0130 23:22:10.050741 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-ds8kf"] Jan 30 23:22:10 crc kubenswrapper[4979]: I0130 23:22:10.059389 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-ds8kf"] Jan 30 23:22:11 crc kubenswrapper[4979]: I0130 23:22:11.044254 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-dc54-account-create-update-qv7gj"] Jan 30 23:22:11 crc kubenswrapper[4979]: I0130 23:22:11.051188 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-dc54-account-create-update-qv7gj"] Jan 30 23:22:11 crc kubenswrapper[4979]: I0130 23:22:11.083565 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59dad3f6-f4ce-4ce7-8364-044694d448f1" path="/var/lib/kubelet/pods/59dad3f6-f4ce-4ce7-8364-044694d448f1/volumes" Jan 30 23:22:11 crc kubenswrapper[4979]: I0130 23:22:11.086101 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90346f0c-7cc3-4f3c-a29f-9b7265eff703" 
path="/var/lib/kubelet/pods/90346f0c-7cc3-4f3c-a29f-9b7265eff703/volumes" Jan 30 23:22:12 crc kubenswrapper[4979]: I0130 23:22:12.054393 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:57726->10.217.1.116:8080: read: connection reset by peer" Jan 30 23:22:12 crc kubenswrapper[4979]: I0130 23:22:12.514836 4979 generic.go:334] "Generic (PLEG): container finished" podID="9525bd9a-233e-4207-ac68-26491c2debf7" containerID="f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd" exitCode=0 Jan 30 23:22:12 crc kubenswrapper[4979]: I0130 23:22:12.514883 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66977458c7-msp58" event={"ID":"9525bd9a-233e-4207-ac68-26491c2debf7","Type":"ContainerDied","Data":"f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd"} Jan 30 23:22:16 crc kubenswrapper[4979]: I0130 23:22:16.383454 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7bf56f7748-njbm7" Jan 30 23:22:16 crc kubenswrapper[4979]: I0130 23:22:16.711004 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-d7d58dff5-tjkx9" Jan 30 23:22:18 crc kubenswrapper[4979]: I0130 23:22:18.952391 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Jan 30 23:22:19 crc kubenswrapper[4979]: I0130 23:22:19.045285 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xz2wl"] Jan 30 23:22:19 crc kubenswrapper[4979]: I0130 23:22:19.059231 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xz2wl"] Jan 30 23:22:19 crc kubenswrapper[4979]: I0130 23:22:19.081994 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="531879a6-b909-4e84-bb7d-9d4e94c5e7f4" path="/var/lib/kubelet/pods/531879a6-b909-4e84-bb7d-9d4e94c5e7f4/volumes" Jan 30 23:22:24 crc kubenswrapper[4979]: I0130 23:22:24.925910 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5998d4684d-smdfx" Jan 30 23:22:28 crc kubenswrapper[4979]: I0130 23:22:28.952126 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Jan 30 23:22:28 crc kubenswrapper[4979]: I0130 23:22:28.952869 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:22:28 crc kubenswrapper[4979]: I0130 23:22:28.964523 4979 scope.go:117] "RemoveContainer" containerID="11a0850d9a45d42789690dcbd834d11974f8f84ffa75d002d1ee278eca1fedce" Jan 30 23:22:29 crc kubenswrapper[4979]: I0130 23:22:29.024223 4979 scope.go:117] "RemoveContainer" containerID="f34f6ceb54e852f4c063e802dbc7f5e8ca92abd2aab5ad9f8f928c8ae9b4ca33" Jan 30 23:22:29 crc kubenswrapper[4979]: I0130 23:22:29.055891 4979 scope.go:117] 
"RemoveContainer" containerID="75374755f204d179d6df7eb604fb78fdccdb8d7da4cf6f4f7c48a481ad71d134" Jan 30 23:22:29 crc kubenswrapper[4979]: I0130 23:22:29.106095 4979 scope.go:117] "RemoveContainer" containerID="22f97911fc2dfbe2d7800553503f0c8338bac7f33443e8a617f5b406e5bdc412" Jan 30 23:22:29 crc kubenswrapper[4979]: I0130 23:22:29.143603 4979 scope.go:117] "RemoveContainer" containerID="1029e32864f04940f8e059d045d4582115f310e97f7c3c3262b89f2a7fc67ed7" Jan 30 23:22:29 crc kubenswrapper[4979]: I0130 23:22:29.199197 4979 scope.go:117] "RemoveContainer" containerID="6fa2af1e71f672ff07c7a8ecac5619dbc74e480f88cacfdf3bd6126656652ae7" Jan 30 23:22:32 crc kubenswrapper[4979]: I0130 23:22:32.040273 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:22:32 crc kubenswrapper[4979]: I0130 23:22:32.040764 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:22:33 crc kubenswrapper[4979]: I0130 23:22:33.944925 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26"] Jan 30 23:22:33 crc kubenswrapper[4979]: I0130 23:22:33.948813 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:33 crc kubenswrapper[4979]: I0130 23:22:33.951182 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 23:22:33 crc kubenswrapper[4979]: I0130 23:22:33.956593 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26"] Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.021294 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbnx8\" (UniqueName: \"kubernetes.io/projected/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-kube-api-access-xbnx8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.021403 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.021466 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: 
\"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.123404 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbnx8\" (UniqueName: \"kubernetes.io/projected/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-kube-api-access-xbnx8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.123500 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.123553 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.124020 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.124233 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.142807 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbnx8\" (UniqueName: \"kubernetes.io/projected/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-kube-api-access-xbnx8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.270374 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:34 crc kubenswrapper[4979]: I0130 23:22:34.774187 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26"] Jan 30 23:22:35 crc kubenswrapper[4979]: I0130 23:22:35.765951 4979 generic.go:334] "Generic (PLEG): container finished" podID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerID="c67e827feac05e26c7e512c2abe1c7b831a97b5c4a93c7b4bffabaaf66afa66f" exitCode=0 Jan 30 23:22:35 crc kubenswrapper[4979]: I0130 23:22:35.766396 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" event={"ID":"a4719f7f-2493-47b2-bd3d-3d2edecf2e00","Type":"ContainerDied","Data":"c67e827feac05e26c7e512c2abe1c7b831a97b5c4a93c7b4bffabaaf66afa66f"} Jan 30 23:22:35 crc kubenswrapper[4979]: I0130 23:22:35.766437 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" event={"ID":"a4719f7f-2493-47b2-bd3d-3d2edecf2e00","Type":"ContainerStarted","Data":"85d37a1e47cc6b63a841c62202148a7a9534288c203c8f8849daaae19dd83a95"} Jan 30 23:22:37 crc kubenswrapper[4979]: I0130 23:22:37.802720 4979 generic.go:334] "Generic (PLEG): container finished" podID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerID="384d532c09aa7b786695a05ab4667146ac46516af403e628f281d37a11af7e0a" exitCode=0 Jan 30 23:22:37 crc kubenswrapper[4979]: I0130 23:22:37.802780 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" event={"ID":"a4719f7f-2493-47b2-bd3d-3d2edecf2e00","Type":"ContainerDied","Data":"384d532c09aa7b786695a05ab4667146ac46516af403e628f281d37a11af7e0a"} Jan 30 23:22:38 crc kubenswrapper[4979]: I0130 23:22:38.815965 4979 generic.go:334] "Generic (PLEG): container finished" podID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerID="c150996f315cfa3b1078f2bd0f329308dc6a001e61c90189f13d6f91d139ae44" exitCode=0 Jan 30 23:22:38 crc kubenswrapper[4979]: I0130 23:22:38.816124 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" event={"ID":"a4719f7f-2493-47b2-bd3d-3d2edecf2e00","Type":"ContainerDied","Data":"c150996f315cfa3b1078f2bd0f329308dc6a001e61c90189f13d6f91d139ae44"} Jan 30 23:22:38 crc kubenswrapper[4979]: I0130 23:22:38.953832 4979 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66977458c7-msp58" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.370838 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.458728 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9525bd9a-233e-4207-ac68-26491c2debf7-logs\") pod \"9525bd9a-233e-4207-ac68-26491c2debf7\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.459089 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-scripts\") pod \"9525bd9a-233e-4207-ac68-26491c2debf7\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.459137 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-config-data\") pod \"9525bd9a-233e-4207-ac68-26491c2debf7\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.459252 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g48z6\" (UniqueName: \"kubernetes.io/projected/9525bd9a-233e-4207-ac68-26491c2debf7-kube-api-access-g48z6\") pod \"9525bd9a-233e-4207-ac68-26491c2debf7\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.459335 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9525bd9a-233e-4207-ac68-26491c2debf7-horizon-secret-key\") pod \"9525bd9a-233e-4207-ac68-26491c2debf7\" (UID: \"9525bd9a-233e-4207-ac68-26491c2debf7\") " Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.459242 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9525bd9a-233e-4207-ac68-26491c2debf7-logs" (OuterVolumeSpecName: "logs") pod "9525bd9a-233e-4207-ac68-26491c2debf7" (UID: "9525bd9a-233e-4207-ac68-26491c2debf7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.459936 4979 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9525bd9a-233e-4207-ac68-26491c2debf7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.464328 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9525bd9a-233e-4207-ac68-26491c2debf7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9525bd9a-233e-4207-ac68-26491c2debf7" (UID: "9525bd9a-233e-4207-ac68-26491c2debf7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.466167 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9525bd9a-233e-4207-ac68-26491c2debf7-kube-api-access-g48z6" (OuterVolumeSpecName: "kube-api-access-g48z6") pod "9525bd9a-233e-4207-ac68-26491c2debf7" (UID: "9525bd9a-233e-4207-ac68-26491c2debf7"). InnerVolumeSpecName "kube-api-access-g48z6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.486624 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-config-data" (OuterVolumeSpecName: "config-data") pod "9525bd9a-233e-4207-ac68-26491c2debf7" (UID: "9525bd9a-233e-4207-ac68-26491c2debf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.486820 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-scripts" (OuterVolumeSpecName: "scripts") pod "9525bd9a-233e-4207-ac68-26491c2debf7" (UID: "9525bd9a-233e-4207-ac68-26491c2debf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.562018 4979 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.562073 4979 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9525bd9a-233e-4207-ac68-26491c2debf7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.562084 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g48z6\" (UniqueName: \"kubernetes.io/projected/9525bd9a-233e-4207-ac68-26491c2debf7-kube-api-access-g48z6\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.562095 4979 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9525bd9a-233e-4207-ac68-26491c2debf7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.826101 4979 generic.go:334] "Generic (PLEG): container finished" podID="9525bd9a-233e-4207-ac68-26491c2debf7" containerID="39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83" exitCode=137 Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.826145 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66977458c7-msp58" event={"ID":"9525bd9a-233e-4207-ac68-26491c2debf7","Type":"ContainerDied","Data":"39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83"} Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.826177 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66977458c7-msp58" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.826197 4979 scope.go:117] "RemoveContainer" containerID="f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd" Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.826186 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66977458c7-msp58" event={"ID":"9525bd9a-233e-4207-ac68-26491c2debf7","Type":"ContainerDied","Data":"a3118c5f9754fea70be47aaa37b81d63a2dc940a51c7c4b8fc9c9e7b5a9b37c4"} Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.869426 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66977458c7-msp58"] Jan 30 23:22:39 crc kubenswrapper[4979]: I0130 23:22:39.878870 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66977458c7-msp58"] Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.016476 4979 scope.go:117] "RemoveContainer" containerID="39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.034998 4979 scope.go:117] "RemoveContainer" containerID="f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd" Jan 30 23:22:40 crc kubenswrapper[4979]: E0130 23:22:40.035529 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd\": container with ID starting with f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd not found: ID does not exist" containerID="f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.035573 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd"} err="failed to get container status \"f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd\": rpc error: code = NotFound desc = could not find container \"f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd\": container with ID starting with f300d051a6c49611205aef7e6478d0870269aedadda32295081ad851e49886fd not found: ID does not exist" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.035603 4979 scope.go:117] "RemoveContainer" containerID="39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83" Jan 30 23:22:40 crc kubenswrapper[4979]: E0130 23:22:40.035839 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83\": container with ID starting with 39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83 not found: ID does not exist" containerID="39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.035894 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83"} err="failed to get container status \"39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83\": rpc error: code = NotFound desc = could not find container \"39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83\": container with ID starting with 39a396bd9898679e9a5ffbb7aa4c343e6f448310479d1dac6b1b310a559cfe83 not found: ID does not exist" Jan 30 23:22:40 crc 
kubenswrapper[4979]: I0130 23:22:40.158342 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.279591 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-util\") pod \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.280106 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbnx8\" (UniqueName: \"kubernetes.io/projected/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-kube-api-access-xbnx8\") pod \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.280292 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-bundle\") pod \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\" (UID: \"a4719f7f-2493-47b2-bd3d-3d2edecf2e00\") " Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.283459 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-bundle" (OuterVolumeSpecName: "bundle") pod "a4719f7f-2493-47b2-bd3d-3d2edecf2e00" (UID: "a4719f7f-2493-47b2-bd3d-3d2edecf2e00"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.284447 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-kube-api-access-xbnx8" (OuterVolumeSpecName: "kube-api-access-xbnx8") pod "a4719f7f-2493-47b2-bd3d-3d2edecf2e00" (UID: "a4719f7f-2493-47b2-bd3d-3d2edecf2e00"). InnerVolumeSpecName "kube-api-access-xbnx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.383591 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbnx8\" (UniqueName: \"kubernetes.io/projected/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-kube-api-access-xbnx8\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.383643 4979 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.423373 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-util" (OuterVolumeSpecName: "util") pod "a4719f7f-2493-47b2-bd3d-3d2edecf2e00" (UID: "a4719f7f-2493-47b2-bd3d-3d2edecf2e00"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.485719 4979 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4719f7f-2493-47b2-bd3d-3d2edecf2e00-util\") on node \"crc\" DevicePath \"\"" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.853152 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" event={"ID":"a4719f7f-2493-47b2-bd3d-3d2edecf2e00","Type":"ContainerDied","Data":"85d37a1e47cc6b63a841c62202148a7a9534288c203c8f8849daaae19dd83a95"} Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.853233 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85d37a1e47cc6b63a841c62202148a7a9534288c203c8f8849daaae19dd83a95" Jan 30 23:22:40 crc kubenswrapper[4979]: I0130 23:22:40.854070 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26" Jan 30 23:22:41 crc kubenswrapper[4979]: I0130 23:22:41.084158 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" path="/var/lib/kubelet/pods/9525bd9a-233e-4207-ac68-26491c2debf7/volumes" Jan 30 23:22:48 crc kubenswrapper[4979]: I0130 23:22:48.092433 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-6fj56"] Jan 30 23:22:48 crc kubenswrapper[4979]: I0130 23:22:48.100634 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-6fj56"] Jan 30 23:22:49 crc kubenswrapper[4979]: I0130 23:22:49.048281 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5d73-account-create-update-kh7g2"] Jan 30 23:22:49 crc kubenswrapper[4979]: I0130 23:22:49.066125 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5d73-account-create-update-kh7g2"] Jan 30 23:22:49 crc kubenswrapper[4979]: I0130 23:22:49.081413 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b37754-6d06-4d68-bf4b-34b553d5750e" path="/var/lib/kubelet/pods/26b37754-6d06-4d68-bf4b-34b553d5750e/volumes" Jan 30 23:22:49 crc kubenswrapper[4979]: I0130 23:22:49.082184 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7ef2a65-30bc-4af2-aa45-16b8b793359c" path="/var/lib/kubelet/pods/d7ef2a65-30bc-4af2-aa45-16b8b793359c/volumes" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.530435 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4"] Jan 30 23:22:52 crc kubenswrapper[4979]: E0130 23:22:52.531358 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon-log" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531374 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon-log" Jan 30 23:22:52 crc kubenswrapper[4979]: E0130 23:22:52.531394 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531400 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" Jan 30 23:22:52 crc kubenswrapper[4979]: E0130 
23:22:52.531424 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="util" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531430 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="util" Jan 30 23:22:52 crc kubenswrapper[4979]: E0130 23:22:52.531441 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="extract" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531446 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="extract" Jan 30 23:22:52 crc kubenswrapper[4979]: E0130 23:22:52.531457 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="pull" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531463 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="pull" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531646 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon-log" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531663 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="9525bd9a-233e-4207-ac68-26491c2debf7" containerName="horizon" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.531684 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4719f7f-2493-47b2-bd3d-3d2edecf2e00" containerName="extract" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.532393 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.533960 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-wttfk" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.534197 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.536838 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.557780 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.655435 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.656759 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqsx\" (UniqueName: \"kubernetes.io/projected/be7dff91-b79d-4a99-a43b-9cc4a9894cda-kube-api-access-nqqsx\") pod \"obo-prometheus-operator-68bc856cb9-t8db4\" (UID: \"be7dff91-b79d-4a99-a43b-9cc4a9894cda\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4"
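Before the new operator pods are admitted, cpu_manager and memory_manager purge per-container state left by the deleted horizon and bundle-extract pods, as logged above. A toy Go version of that RemoveStaleState sweep, assuming a flat map keyed by podUID and containerName rather than the kubelet's actual state_mem layout:

```go
package main

import "fmt"

type key struct{ podUID, container string }

func main() {
	// Assignments left behind by containers that have exited
	// (UIDs shortened from the entries above).
	assignments := map[key]string{
		{"9525bd9a", "horizon"}:     "cpus 2-3",
		{"9525bd9a", "horizon-log"}: "cpus 4-5",
		{"a4719f7f", "extract"}:     "cpus 6-7",
	}
	// Pods the API server still knows about.
	active := map[string]bool{"be7dff91": true}

	// RemoveStaleState: drop every assignment whose pod is gone.
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("Deleted CPUSet assignment podUID=%q container=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}
```

Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.660799 4979 util.go:30] "No sandbox for pod can be found.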
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.663154 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.663388 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qkjpq" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.708479 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.728953 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.730566 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.758200 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/800342ba-21de-4a0e-849e-695bd71885b9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d\" (UID: \"800342ba-21de-4a0e-849e-695bd71885b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.758295 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqsx\" (UniqueName: \"kubernetes.io/projected/be7dff91-b79d-4a99-a43b-9cc4a9894cda-kube-api-access-nqqsx\") pod \"obo-prometheus-operator-68bc856cb9-t8db4\" (UID: \"be7dff91-b79d-4a99-a43b-9cc4a9894cda\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.758324 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/800342ba-21de-4a0e-849e-695bd71885b9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d\" (UID: \"800342ba-21de-4a0e-849e-695bd71885b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.793345 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqsx\" (UniqueName: \"kubernetes.io/projected/be7dff91-b79d-4a99-a43b-9cc4a9894cda-kube-api-access-nqqsx\") pod \"obo-prometheus-operator-68bc856cb9-t8db4\" (UID: \"be7dff91-b79d-4a99-a43b-9cc4a9894cda\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.807387 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.856004 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.861112 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/800342ba-21de-4a0e-849e-695bd71885b9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d\" (UID: \"800342ba-21de-4a0e-849e-695bd71885b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.861216 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0c76d26-1e50-4da5-8774-dde557bb1c50-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w\" (UID: \"a0c76d26-1e50-4da5-8774-dde557bb1c50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.861258 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0c76d26-1e50-4da5-8774-dde557bb1c50-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w\" (UID: \"a0c76d26-1e50-4da5-8774-dde557bb1c50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.861325 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/800342ba-21de-4a0e-849e-695bd71885b9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d\" (UID: \"800342ba-21de-4a0e-849e-695bd71885b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.869533 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/800342ba-21de-4a0e-849e-695bd71885b9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d\" (UID: \"800342ba-21de-4a0e-849e-695bd71885b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.869705 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/800342ba-21de-4a0e-849e-695bd71885b9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d\" (UID: \"800342ba-21de-4a0e-849e-695bd71885b9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.894929 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5c445"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.896507 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.903094 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-nd58h" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.903290 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.921891 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5c445"] Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.963304 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0c76d26-1e50-4da5-8774-dde557bb1c50-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w\" (UID: \"a0c76d26-1e50-4da5-8774-dde557bb1c50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.963625 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0c76d26-1e50-4da5-8774-dde557bb1c50-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w\" (UID: \"a0c76d26-1e50-4da5-8774-dde557bb1c50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.969002 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0c76d26-1e50-4da5-8774-dde557bb1c50-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w\" (UID: \"a0c76d26-1e50-4da5-8774-dde557bb1c50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:52 crc kubenswrapper[4979]: I0130 23:22:52.971604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0c76d26-1e50-4da5-8774-dde557bb1c50-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w\" (UID: \"a0c76d26-1e50-4da5-8774-dde557bb1c50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.037766 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.065617 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm7lj\" (UniqueName: \"kubernetes.io/projected/c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9-kube-api-access-sm7lj\") pod \"observability-operator-59bdc8b94-5c445\" (UID: \"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9\") " pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.065914 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5c445\" (UID: \"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9\") " pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.088337 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.100263 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-99mbt"] Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.101745 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.121160 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-gq6rn" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.146174 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-99mbt"] Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.167942 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6sk\" (UniqueName: \"kubernetes.io/projected/b4d1f5a8-494c-4d68-ac75-0d7516cb7fca-kube-api-access-lt6sk\") pod \"perses-operator-5bf474d74f-99mbt\" (UID: \"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca\") " pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.168110 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm7lj\" (UniqueName: \"kubernetes.io/projected/c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9-kube-api-access-sm7lj\") pod \"observability-operator-59bdc8b94-5c445\" (UID: \"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9\") " pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.168128 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d1f5a8-494c-4d68-ac75-0d7516cb7fca-openshift-service-ca\") pod \"perses-operator-5bf474d74f-99mbt\" (UID: \"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca\") " pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.168151 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5c445\" (UID: \"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9\") " pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.172439 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-5c445\" (UID: \"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9\") " pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.210857 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm7lj\" (UniqueName: \"kubernetes.io/projected/c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9-kube-api-access-sm7lj\") pod \"observability-operator-59bdc8b94-5c445\" (UID: \"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9\") " pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.273183 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt6sk\" (UniqueName: \"kubernetes.io/projected/b4d1f5a8-494c-4d68-ac75-0d7516cb7fca-kube-api-access-lt6sk\") pod \"perses-operator-5bf474d74f-99mbt\" (UID: \"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca\") " pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.273297 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d1f5a8-494c-4d68-ac75-0d7516cb7fca-openshift-service-ca\") pod \"perses-operator-5bf474d74f-99mbt\" (UID: \"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca\") " pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.274327 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d1f5a8-494c-4d68-ac75-0d7516cb7fca-openshift-service-ca\") pod \"perses-operator-5bf474d74f-99mbt\" (UID: \"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca\") " pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.321785 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt6sk\" (UniqueName: \"kubernetes.io/projected/b4d1f5a8-494c-4d68-ac75-0d7516cb7fca-kube-api-access-lt6sk\") pod \"perses-operator-5bf474d74f-99mbt\" (UID: \"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca\") " pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.335530 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.462508 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:22:53 crc kubenswrapper[4979]: I0130 23:22:53.915864 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d"] Jan 30 23:22:54 crc kubenswrapper[4979]: I0130 23:22:54.105264 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" event={"ID":"800342ba-21de-4a0e-849e-695bd71885b9","Type":"ContainerStarted","Data":"184cb0d3ac473e01471faeff93078f8fe818c9c83051279b4dfdeb8b9d1a5b27"} Jan 30 23:22:54 crc kubenswrapper[4979]: I0130 23:22:54.166543 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w"] Jan 30 23:22:54 crc kubenswrapper[4979]: W0130 23:22:54.180019 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0c76d26_1e50_4da5_8774_dde557bb1c50.slice/crio-73ca983b57fe2f98cb09cfad965de7fb7be6a42ec18ac3dfadc9effb1c66743a WatchSource:0}: Error finding container 73ca983b57fe2f98cb09cfad965de7fb7be6a42ec18ac3dfadc9effb1c66743a: Status 404 returned error can't find the container with id 73ca983b57fe2f98cb09cfad965de7fb7be6a42ec18ac3dfadc9effb1c66743a Jan 30 23:22:54 crc kubenswrapper[4979]: I0130 23:22:54.231119 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4"] Jan 30 23:22:54 crc kubenswrapper[4979]: I0130 23:22:54.246517 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-5c445"] Jan 30 23:22:54 crc kubenswrapper[4979]: W0130 23:22:54.249680 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc019a415_f4ef_48f7_a0ce_0ee2e2fc95f9.slice/crio-0780611b006171872ffce3d3f570b13b00c438b40fcda414bb76cc825ec9cef6 WatchSource:0}: Error finding container 0780611b006171872ffce3d3f570b13b00c438b40fcda414bb76cc825ec9cef6: Status 404 returned error can't find the container with id 0780611b006171872ffce3d3f570b13b00c438b40fcda414bb76cc825ec9cef6 Jan 30 23:22:54 crc kubenswrapper[4979]: I0130 23:22:54.297625 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-99mbt"] Jan 30 23:22:55 crc kubenswrapper[4979]: I0130 23:22:55.063291 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8cb96"] Jan 30 23:22:55 crc kubenswrapper[4979]: I0130 23:22:55.088648 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8cb96"] Jan 30 23:22:55 crc kubenswrapper[4979]: I0130 23:22:55.157294 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" event={"ID":"be7dff91-b79d-4a99-a43b-9cc4a9894cda","Type":"ContainerStarted","Data":"2cce48b119f73338cd78cced7bb374b4cfc8f001b745ea226d8cd44cc19b39f7"} Jan 30 23:22:55 crc kubenswrapper[4979]: I0130 23:22:55.162256 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" event={"ID":"a0c76d26-1e50-4da5-8774-dde557bb1c50","Type":"ContainerStarted","Data":"73ca983b57fe2f98cb09cfad965de7fb7be6a42ec18ac3dfadc9effb1c66743a"} Jan 30 23:22:55 crc kubenswrapper[4979]: I0130 23:22:55.164244 4979 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-5c445" event={"ID":"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9","Type":"ContainerStarted","Data":"0780611b006171872ffce3d3f570b13b00c438b40fcda414bb76cc825ec9cef6"} Jan 30 23:22:55 crc kubenswrapper[4979]: I0130 23:22:55.165732 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" event={"ID":"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca","Type":"ContainerStarted","Data":"6d2f417dd431364dd49c5672c9d5eeb0b82a54d5beac17187fab398018064105"} Jan 30 23:22:57 crc kubenswrapper[4979]: I0130 23:22:57.092694 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b42fc3-64fe-40f2-9de5-b6f80489c601" path="/var/lib/kubelet/pods/b5b42fc3-64fe-40f2-9de5-b6f80489c601/volumes" Jan 30 23:23:02 crc kubenswrapper[4979]: I0130 23:23:02.039297 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:23:02 crc kubenswrapper[4979]: I0130 23:23:02.052555 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.385706 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" event={"ID":"800342ba-21de-4a0e-849e-695bd71885b9","Type":"ContainerStarted","Data":"e27d1464fb0f0f3f8a02a3293215a1649986ae0e91f5d79cb11142c8e2f2cb19"} Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.388278 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" event={"ID":"be7dff91-b79d-4a99-a43b-9cc4a9894cda","Type":"ContainerStarted","Data":"9e4e0d74d67c6933b367d4962cb1155e62a68d91f9f929084b356eb71d38260b"} Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.389958 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" event={"ID":"a0c76d26-1e50-4da5-8774-dde557bb1c50","Type":"ContainerStarted","Data":"c2accf9c6c3e4475e2b628c81f912c05b9e1fc444888309f0d2cdd86132df167"} Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.391668 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-5c445" event={"ID":"c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9","Type":"ContainerStarted","Data":"ab6c9339b72856d3c3fe06f9d836c29dd69fded65feec391e2c7829ba17f4943"} Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.391869 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.393379 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" event={"ID":"b4d1f5a8-494c-4d68-ac75-0d7516cb7fca","Type":"ContainerStarted","Data":"308e594958b0e32af33eb7c068568448825b90625a0d9a2736b82eb4c2f84662"} Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 
23:23:08.393861 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.398145 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-5c445" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.417168 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d" podStartSLOduration=3.373339584 podStartE2EDuration="16.417149864s" podCreationTimestamp="2026-01-30 23:22:52 +0000 UTC" firstStartedPulling="2026-01-30 23:22:53.921024781 +0000 UTC m=+6169.882271844" lastFinishedPulling="2026-01-30 23:23:06.964835091 +0000 UTC m=+6182.926082124" observedRunningTime="2026-01-30 23:23:08.411812689 +0000 UTC m=+6184.373059722" watchObservedRunningTime="2026-01-30 23:23:08.417149864 +0000 UTC m=+6184.378396897" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.471920 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-5c445" podStartSLOduration=3.763435473 podStartE2EDuration="16.471897156s" podCreationTimestamp="2026-01-30 23:22:52 +0000 UTC" firstStartedPulling="2026-01-30 23:22:54.257217561 +0000 UTC m=+6170.218464594" lastFinishedPulling="2026-01-30 23:23:06.965679244 +0000 UTC m=+6182.926926277" observedRunningTime="2026-01-30 23:23:08.446676913 +0000 UTC m=+6184.407923956" watchObservedRunningTime="2026-01-30 23:23:08.471897156 +0000 UTC m=+6184.433144179" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.473935 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w" podStartSLOduration=3.737514503 podStartE2EDuration="16.473925291s" podCreationTimestamp="2026-01-30 23:22:52 +0000 UTC" firstStartedPulling="2026-01-30 23:22:54.182575551 +0000 UTC m=+6170.143822584" lastFinishedPulling="2026-01-30 23:23:06.918986329 +0000 UTC m=+6182.880233372" observedRunningTime="2026-01-30 23:23:08.470979952 +0000 UTC m=+6184.432226985" watchObservedRunningTime="2026-01-30 23:23:08.473925291 +0000 UTC m=+6184.435172324" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.511225 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-t8db4" podStartSLOduration=3.850919453 podStartE2EDuration="16.51120241s" podCreationTimestamp="2026-01-30 23:22:52 +0000 UTC" firstStartedPulling="2026-01-30 23:22:54.25786906 +0000 UTC m=+6170.219116093" lastFinishedPulling="2026-01-30 23:23:06.918152017 +0000 UTC m=+6182.879399050" observedRunningTime="2026-01-30 23:23:08.504023265 +0000 UTC m=+6184.465270298" watchObservedRunningTime="2026-01-30 23:23:08.51120241 +0000 UTC m=+6184.472449443" Jan 30 23:23:08 crc kubenswrapper[4979]: I0130 23:23:08.538797 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" podStartSLOduration=2.536009247 podStartE2EDuration="15.538779176s" podCreationTimestamp="2026-01-30 23:22:53 +0000 UTC" firstStartedPulling="2026-01-30 23:22:54.281506159 +0000 UTC m=+6170.242753192" lastFinishedPulling="2026-01-30 23:23:07.284276088 +0000 UTC m=+6183.245523121" observedRunningTime="2026-01-30 23:23:08.53260105 +0000 UTC m=+6184.493848103" 
watchObservedRunningTime="2026-01-30 23:23:08.538779176 +0000 UTC m=+6184.500026209" Jan 30 23:23:13 crc kubenswrapper[4979]: I0130 23:23:13.467419 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-99mbt" Jan 30 23:23:15 crc kubenswrapper[4979]: I0130 23:23:15.988902 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 23:23:15 crc kubenswrapper[4979]: I0130 23:23:15.989468 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" containerName="openstackclient" containerID="cri-o://9e23067542f31893bc50fa1bf6cce7ed4e9c501f08ce728f7f2d98af05d87464" gracePeriod=2 Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.005137 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.045442 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 23:23:16 crc kubenswrapper[4979]: E0130 23:23:16.045843 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" containerName="openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.045856 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" containerName="openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.046063 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" containerName="openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.046665 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.058521 4979 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" podUID="278b06cd-52af-4fce-b0e8-fd7f870b0564" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.076812 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.112213 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/278b06cd-52af-4fce-b0e8-fd7f870b0564-openstack-config\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.112268 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpzvx\" (UniqueName: \"kubernetes.io/projected/278b06cd-52af-4fce-b0e8-fd7f870b0564-kube-api-access-mpzvx\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.112315 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/278b06cd-52af-4fce-b0e8-fd7f870b0564-openstack-config-secret\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.214230 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/278b06cd-52af-4fce-b0e8-fd7f870b0564-openstack-config\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.214566 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpzvx\" (UniqueName: \"kubernetes.io/projected/278b06cd-52af-4fce-b0e8-fd7f870b0564-kube-api-access-mpzvx\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.214617 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/278b06cd-52af-4fce-b0e8-fd7f870b0564-openstack-config-secret\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.215604 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/278b06cd-52af-4fce-b0e8-fd7f870b0564-openstack-config\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.222860 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/278b06cd-52af-4fce-b0e8-fd7f870b0564-openstack-config-secret\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc 
kubenswrapper[4979]: I0130 23:23:16.248638 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpzvx\" (UniqueName: \"kubernetes.io/projected/278b06cd-52af-4fce-b0e8-fd7f870b0564-kube-api-access-mpzvx\") pod \"openstackclient\" (UID: \"278b06cd-52af-4fce-b0e8-fd7f870b0564\") " pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.287532 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.288998 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.292791 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-d97wm" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.312046 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.382665 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.417173 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5f2l\" (UniqueName: \"kubernetes.io/projected/0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7-kube-api-access-j5f2l\") pod \"kube-state-metrics-0\" (UID: \"0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7\") " pod="openstack/kube-state-metrics-0" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.519185 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5f2l\" (UniqueName: \"kubernetes.io/projected/0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7-kube-api-access-j5f2l\") pod \"kube-state-metrics-0\" (UID: \"0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7\") " pod="openstack/kube-state-metrics-0" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.552765 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5f2l\" (UniqueName: \"kubernetes.io/projected/0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7-kube-api-access-j5f2l\") pod \"kube-state-metrics-0\" (UID: \"0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7\") " pod="openstack/kube-state-metrics-0" Jan 30 23:23:16 crc kubenswrapper[4979]: I0130 23:23:16.638602 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.109751 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.115914 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.121470 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.121503 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-ftqfx" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.121650 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.121729 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.122231 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.166449 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.240709 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.240753 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/accadf60-186b-408a-94cb-aae9319d58e9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.240778 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.240814 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.240954 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/accadf60-186b-408a-94cb-aae9319d58e9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.241025 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/accadf60-186b-408a-94cb-aae9319d58e9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: 
\"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.241155 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8km7m\" (UniqueName: \"kubernetes.io/projected/accadf60-186b-408a-94cb-aae9319d58e9-kube-api-access-8km7m\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.342998 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.343075 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/accadf60-186b-408a-94cb-aae9319d58e9-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.343104 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.343143 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.343181 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/accadf60-186b-408a-94cb-aae9319d58e9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.343200 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/accadf60-186b-408a-94cb-aae9319d58e9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.343235 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8km7m\" (UniqueName: \"kubernetes.io/projected/accadf60-186b-408a-94cb-aae9319d58e9-kube-api-access-8km7m\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.352485 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/accadf60-186b-408a-94cb-aae9319d58e9-alertmanager-metric-storage-db\") pod 
\"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.361275 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.361936 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.375579 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/accadf60-186b-408a-94cb-aae9319d58e9-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.421381 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/accadf60-186b-408a-94cb-aae9319d58e9-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.422515 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/accadf60-186b-408a-94cb-aae9319d58e9-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.422531 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8km7m\" (UniqueName: \"kubernetes.io/projected/accadf60-186b-408a-94cb-aae9319d58e9-kube-api-access-8km7m\") pod \"alertmanager-metric-storage-0\" (UID: \"accadf60-186b-408a-94cb-aae9319d58e9\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.480592 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.559610 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.599548 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.764802 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.788718 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.816001 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.817968 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.818564 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.818709 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.818847 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.818953 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.819351 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.819578 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2q5fd" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.828485 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.980611 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.980834 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.980892 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0f8756ad-bff0-4f0d-9444-cbba47490d33-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.980936 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.980962 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.981096 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9vg\" (UniqueName: \"kubernetes.io/projected/0f8756ad-bff0-4f0d-9444-cbba47490d33-kube-api-access-mh9vg\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.981133 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.981160 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.981184 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-config\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:17 crc kubenswrapper[4979]: I0130 23:23:17.981213 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0f8756ad-bff0-4f0d-9444-cbba47490d33-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.083714 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.083972 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.083996 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084058 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0f8756ad-bff0-4f0d-9444-cbba47490d33-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084105 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084127 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084195 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0f8756ad-bff0-4f0d-9444-cbba47490d33-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084236 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084279 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.084351 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh9vg\" (UniqueName: \"kubernetes.io/projected/0f8756ad-bff0-4f0d-9444-cbba47490d33-kube-api-access-mh9vg\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.085590 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.095025 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/0f8756ad-bff0-4f0d-9444-cbba47490d33-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.091208 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.101391 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0f8756ad-bff0-4f0d-9444-cbba47490d33-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.102966 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.103641 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.110466 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f8756ad-bff0-4f0d-9444-cbba47490d33-config\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.111741 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0f8756ad-bff0-4f0d-9444-cbba47490d33-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.131044 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh9vg\" (UniqueName: \"kubernetes.io/projected/0f8756ad-bff0-4f0d-9444-cbba47490d33-kube-api-access-mh9vg\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.133731 4979 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.133754 4979 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3a23f2fd2ff538bdecbc4c91851ecb9066558d51c0e562ef2c378e987d2b8cc0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.300641 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-305cd8e7-c90b-4ddf-a63c-c42fb1bb544f\") pod \"prometheus-metric-storage-0\" (UID: \"0f8756ad-bff0-4f0d-9444-cbba47490d33\") " pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.391540 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.597482 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.604547 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"278b06cd-52af-4fce-b0e8-fd7f870b0564","Type":"ContainerStarted","Data":"d240dff7ace3f094e8522777acd66aac8012edbef9a5c5c5c3d31b11353c6bd5"} Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.604588 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"278b06cd-52af-4fce-b0e8-fd7f870b0564","Type":"ContainerStarted","Data":"5892838ac5cf221a39b5e98f551dff5599da70f484db1eaefba3d9881280020d"} Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.607977 4979 generic.go:334] "Generic (PLEG): container finished" podID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" containerID="9e23067542f31893bc50fa1bf6cce7ed4e9c501f08ce728f7f2d98af05d87464" exitCode=137 Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.608135 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b35f115458eae51c09b989e0ed88002066967bab95f54ae46481d8d55d31f85" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.629369 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7","Type":"ContainerStarted","Data":"aca4677357894a4aac4be38157885c66bf3a7a826c621218a8f7a9cab703a701"} Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.649593 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.667489 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.667465355 podStartE2EDuration="2.667465355s" podCreationTimestamp="2026-01-30 23:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:23:18.637748091 +0000 UTC m=+6194.598995124" watchObservedRunningTime="2026-01-30 23:23:18.667465355 +0000 UTC m=+6194.628712388" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.721778 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k49pn\" (UniqueName: \"kubernetes.io/projected/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-kube-api-access-k49pn\") pod \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.721824 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config-secret\") pod \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.722008 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config\") pod \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\" (UID: \"cac6c8d9-2bae-45c7-9a7d-ca70c121f82e\") " Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.757264 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-kube-api-access-k49pn" (OuterVolumeSpecName: "kube-api-access-k49pn") pod "cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" (UID: "cac6c8d9-2bae-45c7-9a7d-ca70c121f82e"). InnerVolumeSpecName "kube-api-access-k49pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.757671 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" (UID: "cac6c8d9-2bae-45c7-9a7d-ca70c121f82e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.815907 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" (UID: "cac6c8d9-2bae-45c7-9a7d-ca70c121f82e"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.824986 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k49pn\" (UniqueName: \"kubernetes.io/projected/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-kube-api-access-k49pn\") on node \"crc\" DevicePath \"\"" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.825088 4979 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 23:23:18 crc kubenswrapper[4979]: I0130 23:23:18.825100 4979 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.081642 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac6c8d9-2bae-45c7-9a7d-ca70c121f82e" path="/var/lib/kubelet/pods/cac6c8d9-2bae-45c7-9a7d-ca70c121f82e/volumes" Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.102997 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 23:23:19 crc kubenswrapper[4979]: W0130 23:23:19.112913 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f8756ad_bff0_4f0d_9444_cbba47490d33.slice/crio-bdec7f30773ca90cc447b939b0e6dff91966719d0a8a0d07c2b54696415a4518 WatchSource:0}: Error finding container bdec7f30773ca90cc447b939b0e6dff91966719d0a8a0d07c2b54696415a4518: Status 404 returned error can't find the container with id bdec7f30773ca90cc447b939b0e6dff91966719d0a8a0d07c2b54696415a4518 Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.640085 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7","Type":"ContainerStarted","Data":"6173839959b57672d2890be304db629e4b5e96537800946204cf9c170f14c076"} Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.640169 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.641550 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"accadf60-186b-408a-94cb-aae9319d58e9","Type":"ContainerStarted","Data":"09db8213f0273319e87bfc501570cb8c81af85e0a51bbd2c262a5c8da7726a77"} Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.642700 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0f8756ad-bff0-4f0d-9444-cbba47490d33","Type":"ContainerStarted","Data":"bdec7f30773ca90cc447b939b0e6dff91966719d0a8a0d07c2b54696415a4518"} Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.642732 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 23:23:19 crc kubenswrapper[4979]: I0130 23:23:19.657747 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.120085417 podStartE2EDuration="3.657725301s" podCreationTimestamp="2026-01-30 23:23:16 +0000 UTC" firstStartedPulling="2026-01-30 23:23:17.952774989 +0000 UTC m=+6193.914022022" lastFinishedPulling="2026-01-30 23:23:18.490414883 +0000 UTC m=+6194.451661906" observedRunningTime="2026-01-30 23:23:19.654587686 +0000 UTC m=+6195.615834739" watchObservedRunningTime="2026-01-30 23:23:19.657725301 +0000 UTC m=+6195.618972334" Jan 30 23:23:25 crc kubenswrapper[4979]: I0130 23:23:25.709419 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"accadf60-186b-408a-94cb-aae9319d58e9","Type":"ContainerStarted","Data":"33c4b6ff96bc3f1b90e284163c809c2e1e944235ffd2f36009d4065dac7cfeb1"} Jan 30 23:23:25 crc kubenswrapper[4979]: I0130 23:23:25.711548 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0f8756ad-bff0-4f0d-9444-cbba47490d33","Type":"ContainerStarted","Data":"96aa20f913f179c6cd530189409e8a61fbd3c5178ad76ce5cbe5a45d74822b32"} Jan 30 23:23:26 crc kubenswrapper[4979]: I0130 23:23:26.643907 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 23:23:29 crc kubenswrapper[4979]: I0130 23:23:29.404615 4979 scope.go:117] "RemoveContainer" containerID="1ad4342510dcd831bcc75d1de4109d08c8cf80f260002f23328c1e9c71c6966a" Jan 30 23:23:29 crc kubenswrapper[4979]: I0130 23:23:29.433007 4979 scope.go:117] "RemoveContainer" containerID="a62465cb392e615a1f73cdd50e7e273cdf6ffb4563f5d71cdc8e1d86d9a79520" Jan 30 23:23:29 crc kubenswrapper[4979]: I0130 23:23:29.488685 4979 scope.go:117] "RemoveContainer" containerID="9e23067542f31893bc50fa1bf6cce7ed4e9c501f08ce728f7f2d98af05d87464" Jan 30 23:23:29 crc kubenswrapper[4979]: I0130 23:23:29.547513 4979 scope.go:117] "RemoveContainer" containerID="e78c967f90d787e6a500755dd51462d00698c1a63f9294556b2308f1758c7a1f" Jan 30 23:23:31 crc kubenswrapper[4979]: I0130 23:23:31.784181 4979 generic.go:334] "Generic (PLEG): container finished" podID="0f8756ad-bff0-4f0d-9444-cbba47490d33" containerID="96aa20f913f179c6cd530189409e8a61fbd3c5178ad76ce5cbe5a45d74822b32" exitCode=0 Jan 30 23:23:31 crc kubenswrapper[4979]: I0130 23:23:31.784505 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0f8756ad-bff0-4f0d-9444-cbba47490d33","Type":"ContainerDied","Data":"96aa20f913f179c6cd530189409e8a61fbd3c5178ad76ce5cbe5a45d74822b32"} Jan 30 23:23:31 crc kubenswrapper[4979]: I0130 23:23:31.790613 4979 generic.go:334] "Generic (PLEG): container finished" podID="accadf60-186b-408a-94cb-aae9319d58e9" containerID="33c4b6ff96bc3f1b90e284163c809c2e1e944235ffd2f36009d4065dac7cfeb1" exitCode=0 Jan 30 23:23:31 crc kubenswrapper[4979]: I0130 23:23:31.790676 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"accadf60-186b-408a-94cb-aae9319d58e9","Type":"ContainerDied","Data":"33c4b6ff96bc3f1b90e284163c809c2e1e944235ffd2f36009d4065dac7cfeb1"} Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.039653 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.040014 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.040090 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.041407 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce256d253558eef5d462b6fe6f69e6a85674086fe60d9ac7764d0a93afda9e83"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.041468 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://ce256d253558eef5d462b6fe6f69e6a85674086fe60d9ac7764d0a93afda9e83" gracePeriod=600 Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.802263 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="ce256d253558eef5d462b6fe6f69e6a85674086fe60d9ac7764d0a93afda9e83" exitCode=0 Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.802306 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"ce256d253558eef5d462b6fe6f69e6a85674086fe60d9ac7764d0a93afda9e83"} Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.802597 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9"} Jan 30 23:23:32 crc kubenswrapper[4979]: I0130 23:23:32.802621 4979 scope.go:117] "RemoveContainer" containerID="bf41b9c6763981d34be429117086c1d503d9b0d1021645bfe296c0a99b90f39f" Jan 30 23:23:35 crc kubenswrapper[4979]: I0130 23:23:35.851327 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"accadf60-186b-408a-94cb-aae9319d58e9","Type":"ContainerStarted","Data":"513a5b81f34ba40b20d5fcc882ca382de5ecfa8fbe84c38d5996392e2cfb2bac"} Jan 30 23:23:38 crc kubenswrapper[4979]: I0130 23:23:38.882385 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"accadf60-186b-408a-94cb-aae9319d58e9","Type":"ContainerStarted","Data":"da65aada1ec72cd982291d640db937683e25de5a43b382b86e1471d87e0e99f4"} Jan 30 23:23:38 crc kubenswrapper[4979]: I0130 23:23:38.883802 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:38 crc kubenswrapper[4979]: I0130 23:23:38.886326 4979 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 30 23:23:38 crc kubenswrapper[4979]: I0130 23:23:38.886545 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0f8756ad-bff0-4f0d-9444-cbba47490d33","Type":"ContainerStarted","Data":"ddceb57f75765315231851d86a0c073f73d7d3cae609276a6315a5f5bb73a71c"} Jan 30 23:23:38 crc kubenswrapper[4979]: I0130 23:23:38.908802 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.544791042 podStartE2EDuration="21.908781418s" podCreationTimestamp="2026-01-30 23:23:17 +0000 UTC" firstStartedPulling="2026-01-30 23:23:18.649403196 +0000 UTC m=+6194.610650229" lastFinishedPulling="2026-01-30 23:23:35.013393572 +0000 UTC m=+6210.974640605" observedRunningTime="2026-01-30 23:23:38.906435194 +0000 UTC m=+6214.867682227" watchObservedRunningTime="2026-01-30 23:23:38.908781418 +0000 UTC m=+6214.870028471" Jan 30 23:23:41 crc kubenswrapper[4979]: I0130 23:23:41.914679 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0f8756ad-bff0-4f0d-9444-cbba47490d33","Type":"ContainerStarted","Data":"37251a169656347800b4f36930cf604f42b2fa45769ca9875e7c6e7b7255aa5d"} Jan 30 23:23:44 crc kubenswrapper[4979]: I0130 23:23:44.948716 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0f8756ad-bff0-4f0d-9444-cbba47490d33","Type":"ContainerStarted","Data":"7728e5c4dc7ca32a64f3490f16a2204453ccc0c0062d17886857913717d65413"} Jan 30 23:23:44 crc kubenswrapper[4979]: I0130 23:23:44.976531 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.483412473 podStartE2EDuration="28.976510988s" podCreationTimestamp="2026-01-30 23:23:16 +0000 UTC" firstStartedPulling="2026-01-30 23:23:19.116427979 +0000 UTC m=+6195.077675012" lastFinishedPulling="2026-01-30 23:23:44.609526494 +0000 UTC m=+6220.570773527" observedRunningTime="2026-01-30 23:23:44.96840707 +0000 UTC m=+6220.929654103" watchObservedRunningTime="2026-01-30 23:23:44.976510988 +0000 UTC m=+6220.937758021" Jan 30 23:23:48 crc kubenswrapper[4979]: I0130 23:23:48.392957 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:48 crc kubenswrapper[4979]: I0130 23:23:48.393513 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:48 crc kubenswrapper[4979]: I0130 23:23:48.395091 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:48 crc kubenswrapper[4979]: I0130 23:23:48.992988 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.618347 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.621101 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.623722 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.623726 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.642284 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.774371 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-scripts\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.774521 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05655350-25f6-4610-9ec7-f492b4691d5d-run-httpd\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.774568 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05655350-25f6-4610-9ec7-f492b4691d5d-log-httpd\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.774617 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.774741 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlz8f\" (UniqueName: \"kubernetes.io/projected/05655350-25f6-4610-9ec7-f492b4691d5d-kube-api-access-hlz8f\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.775080 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.775152 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-config-data\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.876860 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05655350-25f6-4610-9ec7-f492b4691d5d-run-httpd\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.876924 4979 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05655350-25f6-4610-9ec7-f492b4691d5d-log-httpd\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.876955 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.877119 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlz8f\" (UniqueName: \"kubernetes.io/projected/05655350-25f6-4610-9ec7-f492b4691d5d-kube-api-access-hlz8f\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.877171 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.877191 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-config-data\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.877225 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-scripts\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.877477 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05655350-25f6-4610-9ec7-f492b4691d5d-run-httpd\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.878026 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05655350-25f6-4610-9ec7-f492b4691d5d-log-httpd\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.883612 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.884920 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-config-data\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.885210 4979 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-scripts\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.889235 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05655350-25f6-4610-9ec7-f492b4691d5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.900276 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlz8f\" (UniqueName: \"kubernetes.io/projected/05655350-25f6-4610-9ec7-f492b4691d5d-kube-api-access-hlz8f\") pod \"ceilometer-0\" (UID: \"05655350-25f6-4610-9ec7-f492b4691d5d\") " pod="openstack/ceilometer-0" Jan 30 23:23:49 crc kubenswrapper[4979]: I0130 23:23:49.940324 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 23:23:50 crc kubenswrapper[4979]: W0130 23:23:50.569580 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05655350_25f6_4610_9ec7_f492b4691d5d.slice/crio-86d96307eddc079152844148af3ace002a08967739f0810b3cc39588cebd3fc7 WatchSource:0}: Error finding container 86d96307eddc079152844148af3ace002a08967739f0810b3cc39588cebd3fc7: Status 404 returned error can't find the container with id 86d96307eddc079152844148af3ace002a08967739f0810b3cc39588cebd3fc7 Jan 30 23:23:50 crc kubenswrapper[4979]: I0130 23:23:50.580520 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 23:23:51 crc kubenswrapper[4979]: I0130 23:23:51.029168 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05655350-25f6-4610-9ec7-f492b4691d5d","Type":"ContainerStarted","Data":"86d96307eddc079152844148af3ace002a08967739f0810b3cc39588cebd3fc7"} Jan 30 23:23:52 crc kubenswrapper[4979]: I0130 23:23:52.055744 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05655350-25f6-4610-9ec7-f492b4691d5d","Type":"ContainerStarted","Data":"f3d07368e153a5c55121bdd6fa9cc11231a2103541178316393cdf3221172cf8"} Jan 30 23:23:52 crc kubenswrapper[4979]: I0130 23:23:52.056134 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05655350-25f6-4610-9ec7-f492b4691d5d","Type":"ContainerStarted","Data":"f3670ae1c4492b55077f3979bab9afa417fb637d5d218f4e539df44981ac2af9"} Jan 30 23:23:53 crc kubenswrapper[4979]: I0130 23:23:53.085919 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05655350-25f6-4610-9ec7-f492b4691d5d","Type":"ContainerStarted","Data":"adc05dfc2ac8ea72d88dd757977ad961c2c97ffd1a811db9c6cb5c5ec18cd573"} Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.068462 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fdfp6"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.092627 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9d4a-account-create-update-t4fvj"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.094282 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-nj2pr"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.111097 4979 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-cxszb"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.132412 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9d4a-account-create-update-t4fvj"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.152861 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fdfp6"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.162171 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-cxszb"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.196619 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-nj2pr"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.208363 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-eff7-account-create-update-zbvkl"] Jan 30 23:23:55 crc kubenswrapper[4979]: I0130 23:23:55.213802 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-eff7-account-create-update-zbvkl"] Jan 30 23:23:56 crc kubenswrapper[4979]: I0130 23:23:56.051235 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0ab0-account-create-update-wcw27"] Jan 30 23:23:56 crc kubenswrapper[4979]: I0130 23:23:56.063509 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0ab0-account-create-update-wcw27"] Jan 30 23:23:56 crc kubenswrapper[4979]: I0130 23:23:56.132373 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05655350-25f6-4610-9ec7-f492b4691d5d","Type":"ContainerStarted","Data":"2654e5763661d4a18b2c68a556c110af31a979faaa8d34ded4ff755047de163d"} Jan 30 23:23:56 crc kubenswrapper[4979]: I0130 23:23:56.134130 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 23:23:57 crc kubenswrapper[4979]: I0130 23:23:57.083893 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0010c53f-b0a4-44bd-9178-bbd2941973ff" path="/var/lib/kubelet/pods/0010c53f-b0a4-44bd-9178-bbd2941973ff/volumes" Jan 30 23:23:57 crc kubenswrapper[4979]: I0130 23:23:57.084860 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09fb7fe9-97f7-4af9-897c-e4fb6f234c79" path="/var/lib/kubelet/pods/09fb7fe9-97f7-4af9-897c-e4fb6f234c79/volumes" Jan 30 23:23:57 crc kubenswrapper[4979]: I0130 23:23:57.085458 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a160f3-ed61-481d-be84-cdc6c7b6097a" path="/var/lib/kubelet/pods/20a160f3-ed61-481d-be84-cdc6c7b6097a/volumes" Jan 30 23:23:57 crc kubenswrapper[4979]: I0130 23:23:57.086104 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28312ce4-d376-4d84-9aea-175ee095e2ce" path="/var/lib/kubelet/pods/28312ce4-d376-4d84-9aea-175ee095e2ce/volumes" Jan 30 23:23:57 crc kubenswrapper[4979]: I0130 23:23:57.087159 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f726869-e2f9-4a3b-b40a-236ad3a8566c" path="/var/lib/kubelet/pods/8f726869-e2f9-4a3b-b40a-236ad3a8566c/volumes" Jan 30 23:23:57 crc kubenswrapper[4979]: I0130 23:23:57.088372 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd" path="/var/lib/kubelet/pods/c77ed1c1-c667-4f7b-b39f-b9ce3fab8bfd/volumes" Jan 30 23:24:04 crc kubenswrapper[4979]: I0130 23:24:04.046022 4979 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ceilometer-0" podStartSLOduration=10.45334335 podStartE2EDuration="15.045997341s" podCreationTimestamp="2026-01-30 23:23:49 +0000 UTC" firstStartedPulling="2026-01-30 23:23:50.572087718 +0000 UTC m=+6226.533334751" lastFinishedPulling="2026-01-30 23:23:55.164741709 +0000 UTC m=+6231.125988742" observedRunningTime="2026-01-30 23:23:56.160841903 +0000 UTC m=+6232.122088946" watchObservedRunningTime="2026-01-30 23:24:04.045997341 +0000 UTC m=+6240.007244384" Jan 30 23:24:04 crc kubenswrapper[4979]: I0130 23:24:04.049389 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbzxc"] Jan 30 23:24:04 crc kubenswrapper[4979]: I0130 23:24:04.064198 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbzxc"] Jan 30 23:24:05 crc kubenswrapper[4979]: I0130 23:24:05.097009 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498ed84d-af03-4ccb-bc46-3d1f8ca8861a" path="/var/lib/kubelet/pods/498ed84d-af03-4ccb-bc46-3d1f8ca8861a/volumes" Jan 30 23:24:19 crc kubenswrapper[4979]: I0130 23:24:19.953084 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 23:24:23 crc kubenswrapper[4979]: I0130 23:24:23.040255 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxsh"] Jan 30 23:24:23 crc kubenswrapper[4979]: I0130 23:24:23.052267 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxsh"] Jan 30 23:24:23 crc kubenswrapper[4979]: I0130 23:24:23.084292 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e541a45b-949e-42d3-bbbd-b7fcf76ae045" path="/var/lib/kubelet/pods/e541a45b-949e-42d3-bbbd-b7fcf76ae045/volumes" Jan 30 23:24:24 crc kubenswrapper[4979]: I0130 23:24:24.035264 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-ggn6b"] Jan 30 23:24:24 crc kubenswrapper[4979]: I0130 23:24:24.046318 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-ggn6b"] Jan 30 23:24:25 crc kubenswrapper[4979]: I0130 23:24:25.092624 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39641496-4ab5-48e9-98bf-5627a0a79411" path="/var/lib/kubelet/pods/39641496-4ab5-48e9-98bf-5627a0a79411/volumes" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.695386 4979 scope.go:117] "RemoveContainer" containerID="c97facf775c73b551ef6f9048bed47738d4278893d70fd1c9740e75be9b3292e" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.761240 4979 scope.go:117] "RemoveContainer" containerID="dcc8eb2dc0a607435ecf93ba244414771c7370f6f382f6c64913f281ce050673" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.793345 4979 scope.go:117] "RemoveContainer" containerID="3d49f76579ebce159dde4f7f8e10b1d7dd782ed39ac26b0b2a652ca85113974a" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.841292 4979 scope.go:117] "RemoveContainer" containerID="27840084beb4ba874ff13079199d29959179ee34197c63b9cb25f8f1f6190475" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.886060 4979 scope.go:117] "RemoveContainer" containerID="27746524c4c68ca5b766ef144aa2b7cd8bd00780eefec84e45e51a6c155cf253" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.929215 4979 scope.go:117] "RemoveContainer" containerID="f723b534008a3a9bab8f334c93b4004586730fb78452228a2418e3f55070a126" Jan 30 23:24:29 crc kubenswrapper[4979]: I0130 23:24:29.986173 4979 
scope.go:117] "RemoveContainer" containerID="d6036102c9a9e4c432b5f565faa7d7dd06e4a0ac83ea3d325a705b0f27afa0af" Jan 30 23:24:30 crc kubenswrapper[4979]: I0130 23:24:30.012335 4979 scope.go:117] "RemoveContainer" containerID="658a0275a71d4694f41d9631d5946d0fa7658e2fdfd136878a24bb61565abcdf" Jan 30 23:24:30 crc kubenswrapper[4979]: I0130 23:24:30.042798 4979 scope.go:117] "RemoveContainer" containerID="95d8644ba79bb1a7acb56c2741c41279f8988b80e0d6356e0c3aa672c820a8cd" Jan 30 23:24:37 crc kubenswrapper[4979]: I0130 23:24:37.054071 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jzkql"] Jan 30 23:24:37 crc kubenswrapper[4979]: I0130 23:24:37.064558 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jzkql"] Jan 30 23:24:37 crc kubenswrapper[4979]: I0130 23:24:37.084000 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c7f950-be1a-4557-8548-d41ac49e8010" path="/var/lib/kubelet/pods/a0c7f950-be1a-4557-8548-d41ac49e8010/volumes" Jan 30 23:25:21 crc kubenswrapper[4979]: I0130 23:25:21.044683 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7719-account-create-update-h5jpn"] Jan 30 23:25:21 crc kubenswrapper[4979]: I0130 23:25:21.054496 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-dfcbh"] Jan 30 23:25:21 crc kubenswrapper[4979]: I0130 23:25:21.064527 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7719-account-create-update-h5jpn"] Jan 30 23:25:21 crc kubenswrapper[4979]: I0130 23:25:21.083206 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9737fb48-932e-4216-a323-0fa11a0a136d" path="/var/lib/kubelet/pods/9737fb48-932e-4216-a323-0fa11a0a136d/volumes" Jan 30 23:25:21 crc kubenswrapper[4979]: I0130 23:25:21.083920 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-dfcbh"] Jan 30 23:25:23 crc kubenswrapper[4979]: I0130 23:25:23.081682 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b7f12a-3ae2-43d3-83d8-ea5ac1439aed" path="/var/lib/kubelet/pods/87b7f12a-3ae2-43d3-83d8-ea5ac1439aed/volumes" Jan 30 23:25:29 crc kubenswrapper[4979]: I0130 23:25:29.045632 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-x8rfx"] Jan 30 23:25:29 crc kubenswrapper[4979]: I0130 23:25:29.054355 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-x8rfx"] Jan 30 23:25:29 crc kubenswrapper[4979]: I0130 23:25:29.081925 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f36c73f1-9737-467c-a014-5ac45eb3f512" path="/var/lib/kubelet/pods/f36c73f1-9737-467c-a014-5ac45eb3f512/volumes" Jan 30 23:25:30 crc kubenswrapper[4979]: I0130 23:25:30.234475 4979 scope.go:117] "RemoveContainer" containerID="2e5921219826ad4f6046a051d3c3a9bd5014518b8ece445c4e2400e7ac7d238a" Jan 30 23:25:30 crc kubenswrapper[4979]: I0130 23:25:30.273071 4979 scope.go:117] "RemoveContainer" containerID="d7d84d9b6f642570ec9f0833c3f37b449071bcc3ab74fb1efbfc67cb25be27a7" Jan 30 23:25:30 crc kubenswrapper[4979]: I0130 23:25:30.309735 4979 scope.go:117] "RemoveContainer" containerID="b2aed671841955c62444becfeabff7ccb5bcd0fdccfa5d1f4e24c893f848c58c" Jan 30 23:25:30 crc kubenswrapper[4979]: I0130 23:25:30.354342 4979 scope.go:117] "RemoveContainer" containerID="41cdb0291361e0a8365a54c79470d747f6d9eb9bfc7ccb69ab8969a4d5853007" Jan 30 23:25:30 crc kubenswrapper[4979]: I0130 
23:25:30.415319 4979 scope.go:117] "RemoveContainer" containerID="33793d66c62b82fadedf876d0612a42979bc1f8ad6fccd554e52bcadc661b6fd" Jan 30 23:25:32 crc kubenswrapper[4979]: I0130 23:25:32.039433 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:25:32 crc kubenswrapper[4979]: I0130 23:25:32.039701 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:26:02 crc kubenswrapper[4979]: I0130 23:26:02.039938 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:26:02 crc kubenswrapper[4979]: I0130 23:26:02.040461 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.039898 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.042002 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.042274 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.043547 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.043754 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" gracePeriod=600 Jan 30 23:26:32 crc kubenswrapper[4979]: E0130 23:26:32.168258 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.914509 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" exitCode=0 Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.914625 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9"} Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.915090 4979 scope.go:117] "RemoveContainer" containerID="ce256d253558eef5d462b6fe6f69e6a85674086fe60d9ac7764d0a93afda9e83" Jan 30 23:26:32 crc kubenswrapper[4979]: I0130 23:26:32.916107 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:26:32 crc kubenswrapper[4979]: E0130 23:26:32.916836 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:26:47 crc kubenswrapper[4979]: I0130 23:26:47.081443 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:26:47 crc kubenswrapper[4979]: E0130 23:26:47.083587 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:27:01 crc kubenswrapper[4979]: I0130 23:27:01.069766 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:27:01 crc kubenswrapper[4979]: E0130 23:27:01.070492 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:27:13 crc kubenswrapper[4979]: I0130 23:27:13.070544 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:27:13 crc kubenswrapper[4979]: E0130 23:27:13.071471 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:27:27 crc kubenswrapper[4979]: I0130 23:27:27.070754 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:27:27 crc kubenswrapper[4979]: E0130 23:27:27.071555 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:27:40 crc kubenswrapper[4979]: I0130 23:27:40.070427 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:27:40 crc kubenswrapper[4979]: E0130 23:27:40.071241 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:27:51 crc kubenswrapper[4979]: I0130 23:27:51.070299 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:27:51 crc kubenswrapper[4979]: E0130 23:27:51.071503 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:28:04 crc kubenswrapper[4979]: I0130 23:28:04.070143 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:28:04 crc kubenswrapper[4979]: E0130 23:28:04.071090 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:28:07 crc kubenswrapper[4979]: I0130 23:28:07.058606 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-mp8qq"] Jan 30 23:28:07 crc kubenswrapper[4979]: I0130 23:28:07.097655 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-mp8qq"] Jan 30 23:28:09 crc kubenswrapper[4979]: I0130 23:28:09.041735 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-22c0-account-create-update-pwzqj"] Jan 30 23:28:09 crc kubenswrapper[4979]: I0130 23:28:09.056797 4979 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/octavia-22c0-account-create-update-pwzqj"] Jan 30 23:28:09 crc kubenswrapper[4979]: I0130 23:28:09.085434 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fa0fc85-dd34-469d-a6b4-500d9e17e8cd" path="/var/lib/kubelet/pods/3fa0fc85-dd34-469d-a6b4-500d9e17e8cd/volumes" Jan 30 23:28:09 crc kubenswrapper[4979]: I0130 23:28:09.085992 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cad393e9-51ee-4f44-976c-fb9c28487d67" path="/var/lib/kubelet/pods/cad393e9-51ee-4f44-976c-fb9c28487d67/volumes" Jan 30 23:28:15 crc kubenswrapper[4979]: I0130 23:28:15.049206 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-vn66f"] Jan 30 23:28:15 crc kubenswrapper[4979]: I0130 23:28:15.063805 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-vn66f"] Jan 30 23:28:15 crc kubenswrapper[4979]: I0130 23:28:15.084161 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d8f6093-1ce3-4cb4-829a-71a3aaded46f" path="/var/lib/kubelet/pods/5d8f6093-1ce3-4cb4-829a-71a3aaded46f/volumes" Jan 30 23:28:16 crc kubenswrapper[4979]: I0130 23:28:16.038881 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-ff98-account-create-update-szcww"] Jan 30 23:28:16 crc kubenswrapper[4979]: I0130 23:28:16.052466 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-ff98-account-create-update-szcww"] Jan 30 23:28:17 crc kubenswrapper[4979]: I0130 23:28:17.069844 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:28:17 crc kubenswrapper[4979]: E0130 23:28:17.070343 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:28:17 crc kubenswrapper[4979]: I0130 23:28:17.081611 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9549a4c7-2fb8-4f18-a7d3-902949e90d8c" path="/var/lib/kubelet/pods/9549a4c7-2fb8-4f18-a7d3-902949e90d8c/volumes" Jan 30 23:28:28 crc kubenswrapper[4979]: I0130 23:28:28.069575 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:28:28 crc kubenswrapper[4979]: E0130 23:28:28.070644 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:28:30 crc kubenswrapper[4979]: I0130 23:28:30.606388 4979 scope.go:117] "RemoveContainer" containerID="4585a42ea864cc4af87b4f754b0c7b9540e84f1af59fb62e004a04f42ca82ee5" Jan 30 23:28:30 crc kubenswrapper[4979]: I0130 23:28:30.634335 4979 scope.go:117] "RemoveContainer" containerID="295a318396efe901097828f5812c2e83c8a8ea83df8ad7b1b542f03c853244c2" Jan 30 23:28:30 crc kubenswrapper[4979]: I0130 23:28:30.707166 4979 
scope.go:117] "RemoveContainer" containerID="2901952f949f2b6e5bf0bdfc295d7dcb142b237e525207eca8287fadd9dc45a0" Jan 30 23:28:30 crc kubenswrapper[4979]: I0130 23:28:30.768720 4979 scope.go:117] "RemoveContainer" containerID="ab9d6fd9b6c78c1609831430497301a395dbc97dc2a1cc5b8ce36db173127e64" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.403607 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d6jhp"] Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.409745 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.414595 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d6jhp"] Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.582521 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbkcx\" (UniqueName: \"kubernetes.io/projected/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-kube-api-access-sbkcx\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.583668 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-utilities\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.583813 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-catalog-content\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.686380 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-utilities\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.686435 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-catalog-content\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.686546 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbkcx\" (UniqueName: \"kubernetes.io/projected/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-kube-api-access-sbkcx\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.687185 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-utilities\") pod \"community-operators-d6jhp\" (UID: 
\"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.687309 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-catalog-content\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.718988 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbkcx\" (UniqueName: \"kubernetes.io/projected/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-kube-api-access-sbkcx\") pod \"community-operators-d6jhp\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:37 crc kubenswrapper[4979]: I0130 23:28:37.748286 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:38 crc kubenswrapper[4979]: I0130 23:28:38.427005 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d6jhp"] Jan 30 23:28:39 crc kubenswrapper[4979]: I0130 23:28:39.355832 4979 generic.go:334] "Generic (PLEG): container finished" podID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerID="857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707" exitCode=0 Jan 30 23:28:39 crc kubenswrapper[4979]: I0130 23:28:39.355938 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerDied","Data":"857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707"} Jan 30 23:28:39 crc kubenswrapper[4979]: I0130 23:28:39.356341 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerStarted","Data":"3ec55b0e60daf8ca977cc638e8e02fdff79be6b2603eb72f1beeb983901fb590"} Jan 30 23:28:39 crc kubenswrapper[4979]: I0130 23:28:39.360609 4979 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 23:28:40 crc kubenswrapper[4979]: I0130 23:28:40.072530 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:28:40 crc kubenswrapper[4979]: E0130 23:28:40.073804 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:28:40 crc kubenswrapper[4979]: I0130 23:28:40.371779 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerStarted","Data":"efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165"} Jan 30 23:28:42 crc kubenswrapper[4979]: I0130 23:28:42.403135 4979 generic.go:334] "Generic (PLEG): container finished" podID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" 
containerID="efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165" exitCode=0 Jan 30 23:28:42 crc kubenswrapper[4979]: I0130 23:28:42.403243 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerDied","Data":"efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165"} Jan 30 23:28:43 crc kubenswrapper[4979]: I0130 23:28:43.415689 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerStarted","Data":"5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8"} Jan 30 23:28:43 crc kubenswrapper[4979]: I0130 23:28:43.433964 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d6jhp" podStartSLOduration=2.948287778 podStartE2EDuration="6.433940902s" podCreationTimestamp="2026-01-30 23:28:37 +0000 UTC" firstStartedPulling="2026-01-30 23:28:39.360189968 +0000 UTC m=+6515.321437041" lastFinishedPulling="2026-01-30 23:28:42.845843132 +0000 UTC m=+6518.807090165" observedRunningTime="2026-01-30 23:28:43.431524527 +0000 UTC m=+6519.392771560" watchObservedRunningTime="2026-01-30 23:28:43.433940902 +0000 UTC m=+6519.395187945" Jan 30 23:28:47 crc kubenswrapper[4979]: I0130 23:28:47.748738 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:47 crc kubenswrapper[4979]: I0130 23:28:47.749403 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:47 crc kubenswrapper[4979]: I0130 23:28:47.821156 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:48 crc kubenswrapper[4979]: I0130 23:28:48.524260 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:48 crc kubenswrapper[4979]: I0130 23:28:48.586431 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d6jhp"] Jan 30 23:28:50 crc kubenswrapper[4979]: I0130 23:28:50.490536 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d6jhp" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="registry-server" containerID="cri-o://5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8" gracePeriod=2 Jan 30 23:28:50 crc kubenswrapper[4979]: I0130 23:28:50.970713 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.014153 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-catalog-content\") pod \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.014306 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-utilities\") pod \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.014580 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbkcx\" (UniqueName: \"kubernetes.io/projected/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-kube-api-access-sbkcx\") pod \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\" (UID: \"c9eb63e6-7657-4db8-90c6-f18b77fb3adc\") " Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.017798 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-utilities" (OuterVolumeSpecName: "utilities") pod "c9eb63e6-7657-4db8-90c6-f18b77fb3adc" (UID: "c9eb63e6-7657-4db8-90c6-f18b77fb3adc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.024516 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-kube-api-access-sbkcx" (OuterVolumeSpecName: "kube-api-access-sbkcx") pod "c9eb63e6-7657-4db8-90c6-f18b77fb3adc" (UID: "c9eb63e6-7657-4db8-90c6-f18b77fb3adc"). InnerVolumeSpecName "kube-api-access-sbkcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.084196 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9eb63e6-7657-4db8-90c6-f18b77fb3adc" (UID: "c9eb63e6-7657-4db8-90c6-f18b77fb3adc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.117807 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbkcx\" (UniqueName: \"kubernetes.io/projected/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-kube-api-access-sbkcx\") on node \"crc\" DevicePath \"\"" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.117852 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.117865 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9eb63e6-7657-4db8-90c6-f18b77fb3adc-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.504462 4979 generic.go:334] "Generic (PLEG): container finished" podID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerID="5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8" exitCode=0 Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.504512 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerDied","Data":"5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8"} Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.504591 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6jhp" event={"ID":"c9eb63e6-7657-4db8-90c6-f18b77fb3adc","Type":"ContainerDied","Data":"3ec55b0e60daf8ca977cc638e8e02fdff79be6b2603eb72f1beeb983901fb590"} Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.504610 4979 scope.go:117] "RemoveContainer" containerID="5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.504611 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d6jhp" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.552735 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d6jhp"] Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.565219 4979 scope.go:117] "RemoveContainer" containerID="efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.566393 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d6jhp"] Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.601781 4979 scope.go:117] "RemoveContainer" containerID="857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.637446 4979 scope.go:117] "RemoveContainer" containerID="5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8" Jan 30 23:28:51 crc kubenswrapper[4979]: E0130 23:28:51.637863 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8\": container with ID starting with 5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8 not found: ID does not exist" containerID="5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.638010 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8"} err="failed to get container status \"5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8\": rpc error: code = NotFound desc = could not find container \"5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8\": container with ID starting with 5dd1dc4438fe2e6d40c77e51cb5d82971d7a5a1e1e97de0c4101be1eb980b8e8 not found: ID does not exist" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.638208 4979 scope.go:117] "RemoveContainer" containerID="efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165" Jan 30 23:28:51 crc kubenswrapper[4979]: E0130 23:28:51.638748 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165\": container with ID starting with efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165 not found: ID does not exist" containerID="efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.638778 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165"} err="failed to get container status \"efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165\": rpc error: code = NotFound desc = could not find container \"efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165\": container with ID starting with efa3115e1c70a028df7a6b421d7cd24ed8e9ddaf5864567e8b3845746dde7165 not found: ID does not exist" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.638801 4979 scope.go:117] "RemoveContainer" containerID="857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707" Jan 30 23:28:51 crc kubenswrapper[4979]: E0130 23:28:51.639131 4979 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707\": container with ID starting with 857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707 not found: ID does not exist" containerID="857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707" Jan 30 23:28:51 crc kubenswrapper[4979]: I0130 23:28:51.639154 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707"} err="failed to get container status \"857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707\": rpc error: code = NotFound desc = could not find container \"857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707\": container with ID starting with 857807fd7e7fdf45862ca578ac248c3606c3a552cdf883ddf11633ca93aa5707 not found: ID does not exist" Jan 30 23:28:53 crc kubenswrapper[4979]: I0130 23:28:53.071093 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:28:53 crc kubenswrapper[4979]: E0130 23:28:53.072614 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:28:53 crc kubenswrapper[4979]: I0130 23:28:53.086007 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" path="/var/lib/kubelet/pods/c9eb63e6-7657-4db8-90c6-f18b77fb3adc/volumes" Jan 30 23:29:05 crc kubenswrapper[4979]: I0130 23:29:05.083243 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:29:05 crc kubenswrapper[4979]: E0130 23:29:05.084965 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:29:14 crc kubenswrapper[4979]: I0130 23:29:14.062656 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-4bcmq"] Jan 30 23:29:14 crc kubenswrapper[4979]: I0130 23:29:14.071952 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-4bcmq"] Jan 30 23:29:15 crc kubenswrapper[4979]: I0130 23:29:15.086297 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b39f85e7-5ff3-4843-87ca-0eaa482d5107" path="/var/lib/kubelet/pods/b39f85e7-5ff3-4843-87ca-0eaa482d5107/volumes" Jan 30 23:29:20 crc kubenswrapper[4979]: I0130 23:29:20.070265 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:29:20 crc kubenswrapper[4979]: E0130 23:29:20.071071 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:29:30 crc kubenswrapper[4979]: I0130 23:29:30.935633 4979 scope.go:117] "RemoveContainer" containerID="ff6fff980ddd92a87a7ae04fbc5182179084120991da4ee3062729859c5caa91" Jan 30 23:29:30 crc kubenswrapper[4979]: I0130 23:29:30.977968 4979 scope.go:117] "RemoveContainer" containerID="ccc43b745db314daf28ae463940cf548663352e7673aec67c6df25622cd0610d" Jan 30 23:29:35 crc kubenswrapper[4979]: I0130 23:29:35.084528 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:29:35 crc kubenswrapper[4979]: E0130 23:29:35.085551 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:29:47 crc kubenswrapper[4979]: I0130 23:29:47.070343 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:29:47 crc kubenswrapper[4979]: E0130 23:29:47.071494 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.151181 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv"] Jan 30 23:30:00 crc kubenswrapper[4979]: E0130 23:30:00.152921 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="extract-utilities" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.152954 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="extract-utilities" Jan 30 23:30:00 crc kubenswrapper[4979]: E0130 23:30:00.153085 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="registry-server" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.153136 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="registry-server" Jan 30 23:30:00 crc kubenswrapper[4979]: E0130 23:30:00.153171 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="extract-content" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.153193 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="extract-content" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.153654 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9eb63e6-7657-4db8-90c6-f18b77fb3adc" containerName="registry-server" Jan 30 23:30:00 crc kubenswrapper[4979]: 
I0130 23:30:00.155421 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.158054 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.158831 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.161266 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv"] Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.200649 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-config-volume\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.200910 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h598b\" (UniqueName: \"kubernetes.io/projected/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-kube-api-access-h598b\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.201479 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-secret-volume\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.303922 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-config-volume\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.303995 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h598b\" (UniqueName: \"kubernetes.io/projected/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-kube-api-access-h598b\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.304149 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-secret-volume\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.304756 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-config-volume\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.310811 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-secret-volume\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.319910 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h598b\" (UniqueName: \"kubernetes.io/projected/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-kube-api-access-h598b\") pod \"collect-profiles-29496930-vplvv\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.483923 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:00 crc kubenswrapper[4979]: I0130 23:30:00.910241 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv"] Jan 30 23:30:00 crc kubenswrapper[4979]: W0130 23:30:00.918009 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcff10c30_8e1b_457e_8e33_ab3d23c24bf9.slice/crio-b497fb858d67d98e988f4b1ed4ba86defb9c311d39d986cdf8ab0683868e8ebc WatchSource:0}: Error finding container b497fb858d67d98e988f4b1ed4ba86defb9c311d39d986cdf8ab0683868e8ebc: Status 404 returned error can't find the container with id b497fb858d67d98e988f4b1ed4ba86defb9c311d39d986cdf8ab0683868e8ebc Jan 30 23:30:01 crc kubenswrapper[4979]: I0130 23:30:01.309173 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" event={"ID":"cff10c30-8e1b-457e-8e33-ab3d23c24bf9","Type":"ContainerStarted","Data":"acd704330a20ec0f4bf6517deac2d2d7e79559f7a136972b3616d00497d97f95"} Jan 30 23:30:01 crc kubenswrapper[4979]: I0130 23:30:01.309495 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" event={"ID":"cff10c30-8e1b-457e-8e33-ab3d23c24bf9","Type":"ContainerStarted","Data":"b497fb858d67d98e988f4b1ed4ba86defb9c311d39d986cdf8ab0683868e8ebc"} Jan 30 23:30:01 crc kubenswrapper[4979]: I0130 23:30:01.331665 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" podStartSLOduration=1.331621598 podStartE2EDuration="1.331621598s" podCreationTimestamp="2026-01-30 23:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:30:01.322565682 +0000 UTC m=+6597.283812725" watchObservedRunningTime="2026-01-30 23:30:01.331621598 +0000 UTC m=+6597.292868631" Jan 30 23:30:02 crc kubenswrapper[4979]: I0130 23:30:02.070926 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:30:02 crc kubenswrapper[4979]: E0130 
23:30:02.071533 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:30:02 crc kubenswrapper[4979]: I0130 23:30:02.334132 4979 generic.go:334] "Generic (PLEG): container finished" podID="cff10c30-8e1b-457e-8e33-ab3d23c24bf9" containerID="acd704330a20ec0f4bf6517deac2d2d7e79559f7a136972b3616d00497d97f95" exitCode=0 Jan 30 23:30:02 crc kubenswrapper[4979]: I0130 23:30:02.334189 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" event={"ID":"cff10c30-8e1b-457e-8e33-ab3d23c24bf9","Type":"ContainerDied","Data":"acd704330a20ec0f4bf6517deac2d2d7e79559f7a136972b3616d00497d97f95"} Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.749474 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.769360 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-config-volume\") pod \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.769700 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h598b\" (UniqueName: \"kubernetes.io/projected/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-kube-api-access-h598b\") pod \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.769848 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-secret-volume\") pod \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\" (UID: \"cff10c30-8e1b-457e-8e33-ab3d23c24bf9\") " Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.770703 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-config-volume" (OuterVolumeSpecName: "config-volume") pod "cff10c30-8e1b-457e-8e33-ab3d23c24bf9" (UID: "cff10c30-8e1b-457e-8e33-ab3d23c24bf9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.782381 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cff10c30-8e1b-457e-8e33-ab3d23c24bf9" (UID: "cff10c30-8e1b-457e-8e33-ab3d23c24bf9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.782457 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-kube-api-access-h598b" (OuterVolumeSpecName: "kube-api-access-h598b") pod "cff10c30-8e1b-457e-8e33-ab3d23c24bf9" (UID: "cff10c30-8e1b-457e-8e33-ab3d23c24bf9"). InnerVolumeSpecName "kube-api-access-h598b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.873094 4979 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.873136 4979 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:03 crc kubenswrapper[4979]: I0130 23:30:03.873150 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h598b\" (UniqueName: \"kubernetes.io/projected/cff10c30-8e1b-457e-8e33-ab3d23c24bf9-kube-api-access-h598b\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:04 crc kubenswrapper[4979]: I0130 23:30:04.374949 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" Jan 30 23:30:04 crc kubenswrapper[4979]: I0130 23:30:04.374940 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496930-vplvv" event={"ID":"cff10c30-8e1b-457e-8e33-ab3d23c24bf9","Type":"ContainerDied","Data":"b497fb858d67d98e988f4b1ed4ba86defb9c311d39d986cdf8ab0683868e8ebc"} Jan 30 23:30:04 crc kubenswrapper[4979]: I0130 23:30:04.375090 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b497fb858d67d98e988f4b1ed4ba86defb9c311d39d986cdf8ab0683868e8ebc" Jan 30 23:30:04 crc kubenswrapper[4979]: I0130 23:30:04.403714 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r"] Jan 30 23:30:04 crc kubenswrapper[4979]: I0130 23:30:04.411465 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-4tk4r"] Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.086633 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="104b2fbe-7925-4ef8-afca-adf78844b1e4" path="/var/lib/kubelet/pods/104b2fbe-7925-4ef8-afca-adf78844b1e4/volumes" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.143358 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hvtn9/must-gather-w5l49"] Jan 30 23:30:05 crc kubenswrapper[4979]: E0130 23:30:05.143870 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff10c30-8e1b-457e-8e33-ab3d23c24bf9" containerName="collect-profiles" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.143889 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff10c30-8e1b-457e-8e33-ab3d23c24bf9" containerName="collect-profiles" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.144112 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="cff10c30-8e1b-457e-8e33-ab3d23c24bf9" containerName="collect-profiles" Jan 30 23:30:05 crc 
kubenswrapper[4979]: I0130 23:30:05.145347 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.148608 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hvtn9"/"openshift-service-ca.crt" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.155158 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hvtn9/must-gather-w5l49"] Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.187496 4979 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hvtn9"/"default-dockercfg-lj46c" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.187594 4979 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hvtn9"/"kube-root-ca.crt" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.300990 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9f91df2-3eb9-4624-a492-49e62aa440f5-must-gather-output\") pod \"must-gather-w5l49\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.301735 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tmmw\" (UniqueName: \"kubernetes.io/projected/a9f91df2-3eb9-4624-a492-49e62aa440f5-kube-api-access-9tmmw\") pod \"must-gather-w5l49\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.403665 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9f91df2-3eb9-4624-a492-49e62aa440f5-must-gather-output\") pod \"must-gather-w5l49\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.403820 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tmmw\" (UniqueName: \"kubernetes.io/projected/a9f91df2-3eb9-4624-a492-49e62aa440f5-kube-api-access-9tmmw\") pod \"must-gather-w5l49\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.404161 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9f91df2-3eb9-4624-a492-49e62aa440f5-must-gather-output\") pod \"must-gather-w5l49\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.436745 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tmmw\" (UniqueName: \"kubernetes.io/projected/a9f91df2-3eb9-4624-a492-49e62aa440f5-kube-api-access-9tmmw\") pod \"must-gather-w5l49\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.484117 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:30:05 crc kubenswrapper[4979]: I0130 23:30:05.989078 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hvtn9/must-gather-w5l49"] Jan 30 23:30:06 crc kubenswrapper[4979]: W0130 23:30:06.008635 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9f91df2_3eb9_4624_a492_49e62aa440f5.slice/crio-ebc787a1bed5f0d0d4a249129d73c8be1418ab3046c6011ad50c5100d751a2b6 WatchSource:0}: Error finding container ebc787a1bed5f0d0d4a249129d73c8be1418ab3046c6011ad50c5100d751a2b6: Status 404 returned error can't find the container with id ebc787a1bed5f0d0d4a249129d73c8be1418ab3046c6011ad50c5100d751a2b6 Jan 30 23:30:06 crc kubenswrapper[4979]: I0130 23:30:06.391092 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/must-gather-w5l49" event={"ID":"a9f91df2-3eb9-4624-a492-49e62aa440f5","Type":"ContainerStarted","Data":"ebc787a1bed5f0d0d4a249129d73c8be1418ab3046c6011ad50c5100d751a2b6"} Jan 30 23:30:12 crc kubenswrapper[4979]: I0130 23:30:12.450875 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/must-gather-w5l49" event={"ID":"a9f91df2-3eb9-4624-a492-49e62aa440f5","Type":"ContainerStarted","Data":"cc0954d4b7f7b4f173183a7e8e00887cd4fb5316e7c3adf635a220200ba9af70"} Jan 30 23:30:12 crc kubenswrapper[4979]: I0130 23:30:12.451496 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/must-gather-w5l49" event={"ID":"a9f91df2-3eb9-4624-a492-49e62aa440f5","Type":"ContainerStarted","Data":"bc463b2ba79389ed187340fac491edd8546c2cc8b0dee8689a7ef810a254f1bd"} Jan 30 23:30:12 crc kubenswrapper[4979]: I0130 23:30:12.480887 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hvtn9/must-gather-w5l49" podStartSLOduration=2.3170153190000002 podStartE2EDuration="7.480859412s" podCreationTimestamp="2026-01-30 23:30:05 +0000 UTC" firstStartedPulling="2026-01-30 23:30:06.0115256 +0000 UTC m=+6601.972772633" lastFinishedPulling="2026-01-30 23:30:11.175369663 +0000 UTC m=+6607.136616726" observedRunningTime="2026-01-30 23:30:12.478622542 +0000 UTC m=+6608.439869625" watchObservedRunningTime="2026-01-30 23:30:12.480859412 +0000 UTC m=+6608.442106475" Jan 30 23:30:15 crc kubenswrapper[4979]: I0130 23:30:15.075899 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:30:15 crc kubenswrapper[4979]: E0130 23:30:15.076828 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.429984 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hvtn9/crc-debug-2g9nn"] Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.432436 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.463615 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b309cab1-c68d-4026-ad93-70dbf791d23e-host\") pod \"crc-debug-2g9nn\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.463807 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cschs\" (UniqueName: \"kubernetes.io/projected/b309cab1-c68d-4026-ad93-70dbf791d23e-kube-api-access-cschs\") pod \"crc-debug-2g9nn\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.565249 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b309cab1-c68d-4026-ad93-70dbf791d23e-host\") pod \"crc-debug-2g9nn\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.565398 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b309cab1-c68d-4026-ad93-70dbf791d23e-host\") pod \"crc-debug-2g9nn\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.565426 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cschs\" (UniqueName: \"kubernetes.io/projected/b309cab1-c68d-4026-ad93-70dbf791d23e-kube-api-access-cschs\") pod \"crc-debug-2g9nn\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.592773 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cschs\" (UniqueName: \"kubernetes.io/projected/b309cab1-c68d-4026-ad93-70dbf791d23e-kube-api-access-cschs\") pod \"crc-debug-2g9nn\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:16 crc kubenswrapper[4979]: I0130 23:30:16.752557 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:17 crc kubenswrapper[4979]: I0130 23:30:17.497842 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" event={"ID":"b309cab1-c68d-4026-ad93-70dbf791d23e","Type":"ContainerStarted","Data":"e7a30b0801f720a139eb91c5a4236357d9c34ac472b30206b2c2ef7e456ce932"} Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.428381 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g6gxx"] Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.432382 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.528360 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-catalog-content\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.528742 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-utilities\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.528911 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qfs5\" (UniqueName: \"kubernetes.io/projected/ce9c11ad-8590-45a5-bff9-9694d99cf407-kube-api-access-5qfs5\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.569084 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6gxx"] Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.631549 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-catalog-content\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.631611 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-utilities\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.631682 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qfs5\" (UniqueName: \"kubernetes.io/projected/ce9c11ad-8590-45a5-bff9-9694d99cf407-kube-api-access-5qfs5\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.631998 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-catalog-content\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.632153 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-utilities\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.670392 4979 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5qfs5\" (UniqueName: \"kubernetes.io/projected/ce9c11ad-8590-45a5-bff9-9694d99cf407-kube-api-access-5qfs5\") pod \"certified-operators-g6gxx\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:21 crc kubenswrapper[4979]: I0130 23:30:21.757922 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:29 crc kubenswrapper[4979]: I0130 23:30:29.070249 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:30:29 crc kubenswrapper[4979]: E0130 23:30:29.071516 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:30:29 crc kubenswrapper[4979]: I0130 23:30:29.621123 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" event={"ID":"b309cab1-c68d-4026-ad93-70dbf791d23e","Type":"ContainerStarted","Data":"f040c130bed11dfc093605a6d4570cd022a74910715c781ada26034f68a76925"} Jan 30 23:30:29 crc kubenswrapper[4979]: I0130 23:30:29.640126 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" podStartSLOduration=1.331434525 podStartE2EDuration="13.640110455s" podCreationTimestamp="2026-01-30 23:30:16 +0000 UTC" firstStartedPulling="2026-01-30 23:30:16.793145355 +0000 UTC m=+6612.754392388" lastFinishedPulling="2026-01-30 23:30:29.101821285 +0000 UTC m=+6625.063068318" observedRunningTime="2026-01-30 23:30:29.639778507 +0000 UTC m=+6625.601025540" watchObservedRunningTime="2026-01-30 23:30:29.640110455 +0000 UTC m=+6625.601357488" Jan 30 23:30:30 crc kubenswrapper[4979]: I0130 23:30:30.116665 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6gxx"] Jan 30 23:30:30 crc kubenswrapper[4979]: W0130 23:30:30.121543 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce9c11ad_8590_45a5_bff9_9694d99cf407.slice/crio-b311a24a540b0b288787ff3abf0ee65ec33e9ca3e96614974bd82db584167db3 WatchSource:0}: Error finding container b311a24a540b0b288787ff3abf0ee65ec33e9ca3e96614974bd82db584167db3: Status 404 returned error can't find the container with id b311a24a540b0b288787ff3abf0ee65ec33e9ca3e96614974bd82db584167db3 Jan 30 23:30:30 crc kubenswrapper[4979]: I0130 23:30:30.633549 4979 generic.go:334] "Generic (PLEG): container finished" podID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerID="65e223d547000178886f3ab33399241df2f6bc885d382d317198181db61e8b64" exitCode=0 Jan 30 23:30:30 crc kubenswrapper[4979]: I0130 23:30:30.633787 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerDied","Data":"65e223d547000178886f3ab33399241df2f6bc885d382d317198181db61e8b64"} Jan 30 23:30:30 crc kubenswrapper[4979]: I0130 23:30:30.634097 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerStarted","Data":"b311a24a540b0b288787ff3abf0ee65ec33e9ca3e96614974bd82db584167db3"} Jan 30 23:30:31 crc kubenswrapper[4979]: I0130 23:30:31.111593 4979 scope.go:117] "RemoveContainer" containerID="f4376d94646a15043c11ecee25a291d34f53ab6e158c8bf8bf94d2318ee02027" Jan 30 23:30:31 crc kubenswrapper[4979]: I0130 23:30:31.648447 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerStarted","Data":"293761da1028585e00c2963153d28fcb80977059db255b61aa98b8ee94cc06a8"} Jan 30 23:30:33 crc kubenswrapper[4979]: I0130 23:30:33.669303 4979 generic.go:334] "Generic (PLEG): container finished" podID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerID="293761da1028585e00c2963153d28fcb80977059db255b61aa98b8ee94cc06a8" exitCode=0 Jan 30 23:30:33 crc kubenswrapper[4979]: I0130 23:30:33.669352 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerDied","Data":"293761da1028585e00c2963153d28fcb80977059db255b61aa98b8ee94cc06a8"} Jan 30 23:30:34 crc kubenswrapper[4979]: I0130 23:30:34.680880 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerStarted","Data":"da9ebeb321a2f745c861f0da61403a2228685b64a4f898e82ad145c10dd589cb"} Jan 30 23:30:34 crc kubenswrapper[4979]: I0130 23:30:34.703916 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g6gxx" podStartSLOduration=10.174315246 podStartE2EDuration="13.70389555s" podCreationTimestamp="2026-01-30 23:30:21 +0000 UTC" firstStartedPulling="2026-01-30 23:30:30.636219679 +0000 UTC m=+6626.597466712" lastFinishedPulling="2026-01-30 23:30:34.165799993 +0000 UTC m=+6630.127047016" observedRunningTime="2026-01-30 23:30:34.698407452 +0000 UTC m=+6630.659654485" watchObservedRunningTime="2026-01-30 23:30:34.70389555 +0000 UTC m=+6630.665142583" Jan 30 23:30:41 crc kubenswrapper[4979]: I0130 23:30:41.759252 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:41 crc kubenswrapper[4979]: I0130 23:30:41.759867 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:42 crc kubenswrapper[4979]: I0130 23:30:42.807552 4979 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g6gxx" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="registry-server" probeResult="failure" output=< Jan 30 23:30:42 crc kubenswrapper[4979]: timeout: failed to connect service ":50051" within 1s Jan 30 23:30:42 crc kubenswrapper[4979]: > Jan 30 23:30:43 crc kubenswrapper[4979]: I0130 23:30:43.069484 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:30:43 crc kubenswrapper[4979]: E0130 23:30:43.070294 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:30:50 crc kubenswrapper[4979]: I0130 23:30:50.872836 4979 generic.go:334] "Generic (PLEG): container finished" podID="b309cab1-c68d-4026-ad93-70dbf791d23e" containerID="f040c130bed11dfc093605a6d4570cd022a74910715c781ada26034f68a76925" exitCode=0 Jan 30 23:30:50 crc kubenswrapper[4979]: I0130 23:30:50.872922 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" event={"ID":"b309cab1-c68d-4026-ad93-70dbf791d23e","Type":"ContainerDied","Data":"f040c130bed11dfc093605a6d4570cd022a74910715c781ada26034f68a76925"} Jan 30 23:30:51 crc kubenswrapper[4979]: I0130 23:30:51.822103 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:51 crc kubenswrapper[4979]: I0130 23:30:51.881819 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:51 crc kubenswrapper[4979]: I0130 23:30:51.994937 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.031726 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b309cab1-c68d-4026-ad93-70dbf791d23e-host\") pod \"b309cab1-c68d-4026-ad93-70dbf791d23e\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.031826 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b309cab1-c68d-4026-ad93-70dbf791d23e-host" (OuterVolumeSpecName: "host") pod "b309cab1-c68d-4026-ad93-70dbf791d23e" (UID: "b309cab1-c68d-4026-ad93-70dbf791d23e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.031988 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cschs\" (UniqueName: \"kubernetes.io/projected/b309cab1-c68d-4026-ad93-70dbf791d23e-kube-api-access-cschs\") pod \"b309cab1-c68d-4026-ad93-70dbf791d23e\" (UID: \"b309cab1-c68d-4026-ad93-70dbf791d23e\") " Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.032512 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hvtn9/crc-debug-2g9nn"] Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.032802 4979 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b309cab1-c68d-4026-ad93-70dbf791d23e-host\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.040100 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hvtn9/crc-debug-2g9nn"] Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.046256 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b309cab1-c68d-4026-ad93-70dbf791d23e-kube-api-access-cschs" (OuterVolumeSpecName: "kube-api-access-cschs") pod "b309cab1-c68d-4026-ad93-70dbf791d23e" (UID: "b309cab1-c68d-4026-ad93-70dbf791d23e"). InnerVolumeSpecName "kube-api-access-cschs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.133744 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cschs\" (UniqueName: \"kubernetes.io/projected/b309cab1-c68d-4026-ad93-70dbf791d23e-kube-api-access-cschs\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.148696 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xdjmd"] Jan 30 23:30:52 crc kubenswrapper[4979]: E0130 23:30:52.149156 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b309cab1-c68d-4026-ad93-70dbf791d23e" containerName="container-00" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.149174 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="b309cab1-c68d-4026-ad93-70dbf791d23e" containerName="container-00" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.149376 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="b309cab1-c68d-4026-ad93-70dbf791d23e" containerName="container-00" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.150780 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.177395 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xdjmd"] Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.235791 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-catalog-content\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.236112 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq2nq\" (UniqueName: \"kubernetes.io/projected/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-kube-api-access-sq2nq\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.236204 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-utilities\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.338021 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-utilities\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.338188 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-catalog-content\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.338318 4979 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sq2nq\" (UniqueName: \"kubernetes.io/projected/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-kube-api-access-sq2nq\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.338614 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-utilities\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.338619 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-catalog-content\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.361886 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq2nq\" (UniqueName: \"kubernetes.io/projected/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-kube-api-access-sq2nq\") pod \"redhat-operators-xdjmd\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.467707 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.911637 4979 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7a30b0801f720a139eb91c5a4236357d9c34ac472b30206b2c2ef7e456ce932" Jan 30 23:30:52 crc kubenswrapper[4979]: I0130 23:30:52.911690 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-2g9nn" Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.081792 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b309cab1-c68d-4026-ad93-70dbf791d23e" path="/var/lib/kubelet/pods/b309cab1-c68d-4026-ad93-70dbf791d23e/volumes" Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.171080 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xdjmd"] Jan 30 23:30:53 crc kubenswrapper[4979]: W0130 23:30:53.192177 4979 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecf6ba99_f760_491b_95ed_71ae1b9e34b4.slice/crio-0f04d02aed3ff33e4e250b34e965b3ba25002ebf42318776a71576556b91ebe1 WatchSource:0}: Error finding container 0f04d02aed3ff33e4e250b34e965b3ba25002ebf42318776a71576556b91ebe1: Status 404 returned error can't find the container with id 0f04d02aed3ff33e4e250b34e965b3ba25002ebf42318776a71576556b91ebe1 Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.268217 4979 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hvtn9/crc-debug-c5qv6"] Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.270077 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.359543 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-host\") pod \"crc-debug-c5qv6\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.359628 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwg72\" (UniqueName: \"kubernetes.io/projected/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-kube-api-access-qwg72\") pod \"crc-debug-c5qv6\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.460899 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-host\") pod \"crc-debug-c5qv6\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.460982 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwg72\" (UniqueName: \"kubernetes.io/projected/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-kube-api-access-qwg72\") pod \"crc-debug-c5qv6\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.461076 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-host\") pod \"crc-debug-c5qv6\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.487938 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwg72\" (UniqueName: \"kubernetes.io/projected/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-kube-api-access-qwg72\") pod \"crc-debug-c5qv6\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.606274 4979 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.921595 4979 generic.go:334] "Generic (PLEG): container finished" podID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerID="fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323" exitCode=0 Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.922118 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerDied","Data":"fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323"} Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.922164 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerStarted","Data":"0f04d02aed3ff33e4e250b34e965b3ba25002ebf42318776a71576556b91ebe1"} Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.927769 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" event={"ID":"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab","Type":"ContainerStarted","Data":"b2492877ada34dfefd34b3d39354bafbacf62246eb1096e77de813931b9c9bec"} Jan 30 23:30:53 crc kubenswrapper[4979]: I0130 23:30:53.927817 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" event={"ID":"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab","Type":"ContainerStarted","Data":"899a2958689ea9327648a2a88c41b24151aabda29b83cbec186214ba78ebecbf"} Jan 30 23:30:54 crc kubenswrapper[4979]: I0130 23:30:54.008088 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hvtn9/crc-debug-c5qv6"] Jan 30 23:30:54 crc kubenswrapper[4979]: I0130 23:30:54.028573 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hvtn9/crc-debug-c5qv6"] Jan 30 23:30:54 crc kubenswrapper[4979]: I0130 23:30:54.957081 4979 generic.go:334] "Generic (PLEG): container finished" podID="e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" containerID="b2492877ada34dfefd34b3d39354bafbacf62246eb1096e77de813931b9c9bec" exitCode=1 Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.055626 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.769063 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-host\") pod \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.769244 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwg72\" (UniqueName: \"kubernetes.io/projected/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-kube-api-access-qwg72\") pod \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\" (UID: \"e3c7f57f-ff39-482a-bb7a-ca4882cf8fab\") " Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.782220 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-host" (OuterVolumeSpecName: "host") pod "e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" (UID: "e3c7f57f-ff39-482a-bb7a-ca4882cf8fab"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.810413 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-kube-api-access-qwg72" (OuterVolumeSpecName: "kube-api-access-qwg72") pod "e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" (UID: "e3c7f57f-ff39-482a-bb7a-ca4882cf8fab"). InnerVolumeSpecName "kube-api-access-qwg72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.821364 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6gxx"] Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.821599 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g6gxx" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="registry-server" containerID="cri-o://da9ebeb321a2f745c861f0da61403a2228685b64a4f898e82ad145c10dd589cb" gracePeriod=2 Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.874050 4979 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-host\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.874419 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwg72\" (UniqueName: \"kubernetes.io/projected/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab-kube-api-access-qwg72\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.998939 4979 generic.go:334] "Generic (PLEG): container finished" podID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerID="da9ebeb321a2f745c861f0da61403a2228685b64a4f898e82ad145c10dd589cb" exitCode=0 Jan 30 23:30:55 crc kubenswrapper[4979]: I0130 23:30:55.999136 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerDied","Data":"da9ebeb321a2f745c861f0da61403a2228685b64a4f898e82ad145c10dd589cb"} Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.001394 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerStarted","Data":"1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044"} Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.010703 4979 scope.go:117] "RemoveContainer" containerID="b2492877ada34dfefd34b3d39354bafbacf62246eb1096e77de813931b9c9bec" Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.010873 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hvtn9/crc-debug-c5qv6" Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.069966 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:30:56 crc kubenswrapper[4979]: E0130 23:30:56.070288 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.376815 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.489789 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-catalog-content\") pod \"ce9c11ad-8590-45a5-bff9-9694d99cf407\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.489847 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qfs5\" (UniqueName: \"kubernetes.io/projected/ce9c11ad-8590-45a5-bff9-9694d99cf407-kube-api-access-5qfs5\") pod \"ce9c11ad-8590-45a5-bff9-9694d99cf407\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.489896 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-utilities\") pod \"ce9c11ad-8590-45a5-bff9-9694d99cf407\" (UID: \"ce9c11ad-8590-45a5-bff9-9694d99cf407\") " Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.491233 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-utilities" (OuterVolumeSpecName: "utilities") pod "ce9c11ad-8590-45a5-bff9-9694d99cf407" (UID: "ce9c11ad-8590-45a5-bff9-9694d99cf407"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.500347 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9c11ad-8590-45a5-bff9-9694d99cf407-kube-api-access-5qfs5" (OuterVolumeSpecName: "kube-api-access-5qfs5") pod "ce9c11ad-8590-45a5-bff9-9694d99cf407" (UID: "ce9c11ad-8590-45a5-bff9-9694d99cf407"). InnerVolumeSpecName "kube-api-access-5qfs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.542186 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce9c11ad-8590-45a5-bff9-9694d99cf407" (UID: "ce9c11ad-8590-45a5-bff9-9694d99cf407"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.592300 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.592337 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qfs5\" (UniqueName: \"kubernetes.io/projected/ce9c11ad-8590-45a5-bff9-9694d99cf407-kube-api-access-5qfs5\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:56 crc kubenswrapper[4979]: I0130 23:30:56.592348 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9c11ad-8590-45a5-bff9-9694d99cf407-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.022562 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6gxx" Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.023775 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6gxx" event={"ID":"ce9c11ad-8590-45a5-bff9-9694d99cf407","Type":"ContainerDied","Data":"b311a24a540b0b288787ff3abf0ee65ec33e9ca3e96614974bd82db584167db3"} Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.023819 4979 scope.go:117] "RemoveContainer" containerID="da9ebeb321a2f745c861f0da61403a2228685b64a4f898e82ad145c10dd589cb" Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.084687 4979 scope.go:117] "RemoveContainer" containerID="293761da1028585e00c2963153d28fcb80977059db255b61aa98b8ee94cc06a8" Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.094462 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" path="/var/lib/kubelet/pods/e3c7f57f-ff39-482a-bb7a-ca4882cf8fab/volumes" Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.095617 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6gxx"] Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.103966 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g6gxx"] Jan 30 23:30:57 crc kubenswrapper[4979]: I0130 23:30:57.124563 4979 scope.go:117] "RemoveContainer" containerID="65e223d547000178886f3ab33399241df2f6bc885d382d317198181db61e8b64" Jan 30 23:30:59 crc kubenswrapper[4979]: I0130 23:30:59.085934 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" path="/var/lib/kubelet/pods/ce9c11ad-8590-45a5-bff9-9694d99cf407/volumes" Jan 30 23:31:02 crc kubenswrapper[4979]: I0130 23:31:02.080412 4979 generic.go:334] "Generic (PLEG): container finished" podID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerID="1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044" exitCode=0 Jan 30 23:31:02 crc kubenswrapper[4979]: I0130 23:31:02.080457 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerDied","Data":"1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044"} Jan 30 23:31:03 crc kubenswrapper[4979]: I0130 23:31:03.127709 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" 
event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerStarted","Data":"cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672"} Jan 30 23:31:03 crc kubenswrapper[4979]: I0130 23:31:03.154322 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xdjmd" podStartSLOduration=2.556666116 podStartE2EDuration="11.154303849s" podCreationTimestamp="2026-01-30 23:30:52 +0000 UTC" firstStartedPulling="2026-01-30 23:30:53.925255264 +0000 UTC m=+6649.886502297" lastFinishedPulling="2026-01-30 23:31:02.522892997 +0000 UTC m=+6658.484140030" observedRunningTime="2026-01-30 23:31:03.150451414 +0000 UTC m=+6659.111698447" watchObservedRunningTime="2026-01-30 23:31:03.154303849 +0000 UTC m=+6659.115550882" Jan 30 23:31:10 crc kubenswrapper[4979]: I0130 23:31:10.072236 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:31:10 crc kubenswrapper[4979]: E0130 23:31:10.072999 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:31:12 crc kubenswrapper[4979]: I0130 23:31:12.468809 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:31:12 crc kubenswrapper[4979]: I0130 23:31:12.469190 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:31:12 crc kubenswrapper[4979]: I0130 23:31:12.519677 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:31:13 crc kubenswrapper[4979]: I0130 23:31:13.306615 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:31:13 crc kubenswrapper[4979]: I0130 23:31:13.366043 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xdjmd"] Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.250211 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xdjmd" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="registry-server" containerID="cri-o://cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672" gracePeriod=2 Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.778797 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.940257 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-catalog-content\") pod \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.940496 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq2nq\" (UniqueName: \"kubernetes.io/projected/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-kube-api-access-sq2nq\") pod \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.940557 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-utilities\") pod \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\" (UID: \"ecf6ba99-f760-491b-95ed-71ae1b9e34b4\") " Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.941551 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-utilities" (OuterVolumeSpecName: "utilities") pod "ecf6ba99-f760-491b-95ed-71ae1b9e34b4" (UID: "ecf6ba99-f760-491b-95ed-71ae1b9e34b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:31:15 crc kubenswrapper[4979]: I0130 23:31:15.950587 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-kube-api-access-sq2nq" (OuterVolumeSpecName: "kube-api-access-sq2nq") pod "ecf6ba99-f760-491b-95ed-71ae1b9e34b4" (UID: "ecf6ba99-f760-491b-95ed-71ae1b9e34b4"). InnerVolumeSpecName "kube-api-access-sq2nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.043379 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.043420 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq2nq\" (UniqueName: \"kubernetes.io/projected/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-kube-api-access-sq2nq\") on node \"crc\" DevicePath \"\"" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.072124 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecf6ba99-f760-491b-95ed-71ae1b9e34b4" (UID: "ecf6ba99-f760-491b-95ed-71ae1b9e34b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.145646 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6ba99-f760-491b-95ed-71ae1b9e34b4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.293873 4979 generic.go:334] "Generic (PLEG): container finished" podID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerID="cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672" exitCode=0 Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.294217 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdjmd" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.294273 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerDied","Data":"cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672"} Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.297602 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdjmd" event={"ID":"ecf6ba99-f760-491b-95ed-71ae1b9e34b4","Type":"ContainerDied","Data":"0f04d02aed3ff33e4e250b34e965b3ba25002ebf42318776a71576556b91ebe1"} Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.297707 4979 scope.go:117] "RemoveContainer" containerID="cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.366679 4979 scope.go:117] "RemoveContainer" containerID="1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.401083 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xdjmd"] Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.413589 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xdjmd"] Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.430981 4979 scope.go:117] "RemoveContainer" containerID="fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.464399 4979 scope.go:117] "RemoveContainer" containerID="cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672" Jan 30 23:31:16 crc kubenswrapper[4979]: E0130 23:31:16.466655 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672\": container with ID starting with cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672 not found: ID does not exist" containerID="cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.466687 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672"} err="failed to get container status \"cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672\": rpc error: code = NotFound desc = could not find container \"cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672\": container with ID starting with cd424181acd8bc2eb917f7d437a3ba8e2c95e60272ac837bd683160fe906b672 not found: ID does not exist" Jan 30 23:31:16 crc 
kubenswrapper[4979]: I0130 23:31:16.466712 4979 scope.go:117] "RemoveContainer" containerID="1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044" Jan 30 23:31:16 crc kubenswrapper[4979]: E0130 23:31:16.467265 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044\": container with ID starting with 1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044 not found: ID does not exist" containerID="1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.467425 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044"} err="failed to get container status \"1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044\": rpc error: code = NotFound desc = could not find container \"1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044\": container with ID starting with 1d0cc29b3533e6819a1db13014a85680211a3d8cdb29be5caf5455c77ee98044 not found: ID does not exist" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.467635 4979 scope.go:117] "RemoveContainer" containerID="fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323" Jan 30 23:31:16 crc kubenswrapper[4979]: E0130 23:31:16.468023 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323\": container with ID starting with fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323 not found: ID does not exist" containerID="fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323" Jan 30 23:31:16 crc kubenswrapper[4979]: I0130 23:31:16.468153 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323"} err="failed to get container status \"fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323\": rpc error: code = NotFound desc = could not find container \"fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323\": container with ID starting with fced1f7e138c5de1e9d5b20a1a461921713765c2a31483330a26d8167fd2d323 not found: ID does not exist" Jan 30 23:31:17 crc kubenswrapper[4979]: I0130 23:31:17.092407 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" path="/var/lib/kubelet/pods/ecf6ba99-f760-491b-95ed-71ae1b9e34b4/volumes" Jan 30 23:31:24 crc kubenswrapper[4979]: I0130 23:31:24.069344 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:31:24 crc kubenswrapper[4979]: E0130 23:31:24.070115 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:31:39 crc kubenswrapper[4979]: I0130 23:31:39.070349 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" 
Jan 30 23:31:39 crc kubenswrapper[4979]: I0130 23:31:39.517047 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6"} Jan 30 23:31:46 crc kubenswrapper[4979]: I0130 23:31:46.699369 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_accadf60-186b-408a-94cb-aae9319d58e9/init-config-reloader/0.log" Jan 30 23:31:46 crc kubenswrapper[4979]: I0130 23:31:46.888872 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_accadf60-186b-408a-94cb-aae9319d58e9/alertmanager/0.log" Jan 30 23:31:46 crc kubenswrapper[4979]: I0130 23:31:46.895903 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_accadf60-186b-408a-94cb-aae9319d58e9/init-config-reloader/0.log" Jan 30 23:31:46 crc kubenswrapper[4979]: I0130 23:31:46.961827 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_accadf60-186b-408a-94cb-aae9319d58e9/config-reloader/0.log" Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.084945 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6d46697d68-frccf_58f76ba6-bd87-414d-b226-07f7a8705fea/barbican-api/0.log" Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.131588 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6d46697d68-frccf_58f76ba6-bd87-414d-b226-07f7a8705fea/barbican-api-log/0.log" Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.259341 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7ff7d98446-pts46_0e21af86-2d45-409c-b692-97bc60c3d806/barbican-keystone-listener/0.log" Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.306692 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7ff7d98446-pts46_0e21af86-2d45-409c-b692-97bc60c3d806/barbican-keystone-listener-log/0.log" Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.438267 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c85d579b5-svwjh_fd72817a-eff0-4fac-ba2b-040115385897/barbican-worker/0.log" Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.454296 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c85d579b5-svwjh_fd72817a-eff0-4fac-ba2b-040115385897/barbican-worker-log/0.log" Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.623493 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05655350-25f6-4610-9ec7-f492b4691d5d/ceilometer-central-agent/0.log" Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.633065 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05655350-25f6-4610-9ec7-f492b4691d5d/ceilometer-notification-agent/0.log" Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.662796 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05655350-25f6-4610-9ec7-f492b4691d5d/proxy-httpd/0.log" Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.790402 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_05655350-25f6-4610-9ec7-f492b4691d5d/sg-core/0.log" Jan 30 23:31:47 crc 
kubenswrapper[4979]: I0130 23:31:47.832822 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d24af8b-b86a-4604-82a5-e3d014dba7b5/cinder-api/0.log" Jan 30 23:31:47 crc kubenswrapper[4979]: I0130 23:31:47.854208 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d24af8b-b86a-4604-82a5-e3d014dba7b5/cinder-api-log/0.log" Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.047195 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_c3e02f71-2ffc-45bb-9344-28ff1640cffd/probe/0.log" Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.158847 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_c3e02f71-2ffc-45bb-9344-28ff1640cffd/cinder-backup/0.log" Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.234887 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_88f999da-53cb-4370-ab43-2a6623aa6d51/probe/0.log" Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.257000 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_88f999da-53cb-4370-ab43-2a6623aa6d51/cinder-scheduler/0.log" Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.413989 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_3aa75164-0d7b-4b9a-a21d-2c5834956114/cinder-volume/0.log" Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.467671 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_3aa75164-0d7b-4b9a-a21d-2c5834956114/probe/0.log" Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.564212 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-689759d469-jqhxp_d2693393-b0b5-4009-9c45-80d154fa756c/init/0.log" Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.785948 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-689759d469-jqhxp_d2693393-b0b5-4009-9c45-80d154fa756c/dnsmasq-dns/0.log" Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.786421 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-689759d469-jqhxp_d2693393-b0b5-4009-9c45-80d154fa756c/init/0.log" Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.815458 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_67c81730-0360-4ee7-a657-774bab3e5ce1/glance-httpd/0.log" Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.953000 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_67c81730-0360-4ee7-a657-774bab3e5ce1/glance-log/0.log" Jan 30 23:31:48 crc kubenswrapper[4979]: I0130 23:31:48.996673 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a/glance-httpd/0.log" Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.035727 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0ea368fb-0d2b-4cdd-8edd-bf1cb3b29d4a/glance-log/0.log" Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.220472 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-90f9-account-create-update-f758c_3b4b69e9-3082-4eac-a4c9-2fd308ed75bd/mariadb-account-create-update/0.log" Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.229291 4979 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_heat-api-d7d58dff5-tjkx9_608b4783-d5c9-467f-9a08-9cd6bc0f0fa9/heat-api/0.log" Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.422829 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-db-create-vjhff_e764deeb-609a-4c01-8e75-729988b54849/mariadb-database-create/0.log" Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.455922 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7bf56f7748-njbm7_d72b8dbc-f35e-4aea-ab91-75be38745fd1/heat-cfnapi/0.log" Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.652835 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-db-sync-lhhst_4e6a3c61-50ef-48b5-bcc0-ab3374693979/heat-db-sync/0.log" Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.740808 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5998d4684d-smdfx_b2612383-27a6-4663-b45a-0aac825bf021/heat-engine/0.log" Jan 30 23:31:49 crc kubenswrapper[4979]: I0130 23:31:49.929284 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-9b688f5c-2xlsg_d199303b-d615-40f9-a420-bfde359d8392/horizon-log/0.log" Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.001070 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-9b688f5c-2xlsg_d199303b-d615-40f9-a420-bfde359d8392/horizon/0.log" Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.042703 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-90f9-account-create-update-f758c"] Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.063226 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-vjhff"] Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.072818 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-vjhff"] Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.079404 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-90f9-account-create-update-f758c"] Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.208151 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5b988cf8cf-m4gbb_564a9679-372a-47bb-be3d-70b37a775724/keystone-api/0.log" Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.223795 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0a4e1a15-bf2b-4e60-9a91-1bae92f52fa7/kube-state-metrics/0.log" Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.350992 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_74f9350b-6f51-40b4-85a5-be1ffad9eb0c/adoption/0.log" Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.585276 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-998b6c5dc-s8h29_633158e6-5d40-43e2-a2c9-94e611b32d3c/neutron-api/0.log" Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.678241 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-998b6c5dc-s8h29_633158e6-5d40-43e2-a2c9-94e611b32d3c/neutron-httpd/0.log" Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.893266 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9ce01f4b-19ef-4c0b-ab4c-f76e96297fde/nova-api-log/0.log" Jan 30 23:31:50 crc kubenswrapper[4979]: I0130 23:31:50.912351 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_9ce01f4b-19ef-4c0b-ab4c-f76e96297fde/nova-api-api/0.log" Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.045480 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_274c05f8-cb23-41d5-b911-5d13bac207a0/nova-cell0-conductor-conductor/0.log" Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.080639 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b4b69e9-3082-4eac-a4c9-2fd308ed75bd" path="/var/lib/kubelet/pods/3b4b69e9-3082-4eac-a4c9-2fd308ed75bd/volumes" Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.081388 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e764deeb-609a-4c01-8e75-729988b54849" path="/var/lib/kubelet/pods/e764deeb-609a-4c01-8e75-729988b54849/volumes" Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.220121 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3ab6e2f8-0934-41a0-b35e-0c6e0b5dacd1/nova-cell1-conductor-conductor/0.log" Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.318762 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f517549b-f450-42f3-9445-6b45713a7328/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.445647 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a1269d92-1612-453c-8e80-29981ced4aca/nova-metadata-metadata/0.log" Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.472927 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a1269d92-1612-453c-8e80-29981ced4aca/nova-metadata-log/0.log" Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.696546 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-657b9576cf-gswsb_bc255f37-2650-4c57-b4d0-4709be5a5d25/init/0.log" Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.730423 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b6d75777-1cab-4bbc-ab03-361b03c488f4/nova-scheduler-scheduler/0.log" Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.892169 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-657b9576cf-gswsb_bc255f37-2650-4c57-b4d0-4709be5a5d25/init/0.log" Jan 30 23:31:51 crc kubenswrapper[4979]: I0130 23:31:51.947705 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-657b9576cf-gswsb_bc255f37-2650-4c57-b4d0-4709be5a5d25/octavia-api-provider-agent/0.log" Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.105249 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-657b9576cf-gswsb_bc255f37-2650-4c57-b4d0-4709be5a5d25/octavia-api/0.log" Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.156564 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-pbxbw_e7a38a33-332b-484f-a620-5ecc2b52d9d8/init/0.log" Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.302993 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-pbxbw_e7a38a33-332b-484f-a620-5ecc2b52d9d8/init/0.log" Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.346995 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-pbxbw_e7a38a33-332b-484f-a620-5ecc2b52d9d8/octavia-healthmanager/0.log" Jan 30 23:31:52 crc kubenswrapper[4979]: 
I0130 23:31:52.396654 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-89w6g_82154ec9-1201-41a2-a0f2-904b2db3c497/init/0.log" Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.698699 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-89w6g_82154ec9-1201-41a2-a0f2-904b2db3c497/init/0.log" Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.727401 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-89w6g_82154ec9-1201-41a2-a0f2-904b2db3c497/octavia-housekeeping/0.log" Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.769634 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-p7ttv_e59aa6da-4048-4cf0-add7-cb98472425cb/init/0.log" Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.909114 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-p7ttv_e59aa6da-4048-4cf0-add7-cb98472425cb/init/0.log" Jan 30 23:31:52 crc kubenswrapper[4979]: I0130 23:31:52.942323 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-p7ttv_e59aa6da-4048-4cf0-add7-cb98472425cb/octavia-rsyslog/0.log" Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.003598 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-m8s2f_81ae9dc0-5b82-4990-878a-9570fc849c26/init/0.log" Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.181955 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-m8s2f_81ae9dc0-5b82-4990-878a-9570fc849c26/init/0.log" Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.273689 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7dad08bf-c93b-417a-aeef-633e774fffcc/mysql-bootstrap/0.log" Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.300701 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-m8s2f_81ae9dc0-5b82-4990-878a-9570fc849c26/octavia-worker/0.log" Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.530052 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7dad08bf-c93b-417a-aeef-633e774fffcc/mysql-bootstrap/0.log" Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.535258 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7dad08bf-c93b-417a-aeef-633e774fffcc/galera/0.log" Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.798437 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_20a89776-fed1-4db4-80e6-11cfdb8f810b/mysql-bootstrap/0.log" Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.907156 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_20a89776-fed1-4db4-80e6-11cfdb8f810b/mysql-bootstrap/0.log" Jan 30 23:31:53 crc kubenswrapper[4979]: I0130 23:31:53.965396 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_20a89776-fed1-4db4-80e6-11cfdb8f810b/galera/0.log" Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.067082 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_278b06cd-52af-4fce-b0e8-fd7f870b0564/openstackclient/0.log" Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.198397 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-kssd2_2524172b-c864-4a7f-8c66-ffd219fa7be6/ovn-controller/0.log" Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.314070 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-56vn2_927cfb5e-5147-4154-aad7-bd9d4aae47b2/openstack-network-exporter/0.log" Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.722437 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-54q6d_5f8d6c92-62f8-427c-8208-cf3ba6d98af7/ovsdb-server-init/0.log" Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.732685 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-54q6d_5f8d6c92-62f8-427c-8208-cf3ba6d98af7/ovsdb-server/0.log" Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.733456 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-54q6d_5f8d6c92-62f8-427c-8208-cf3ba6d98af7/ovsdb-server-init/0.log" Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.738878 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-54q6d_5f8d6c92-62f8-427c-8208-cf3ba6d98af7/ovs-vswitchd/0.log" Jan 30 23:31:54 crc kubenswrapper[4979]: I0130 23:31:54.935695 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_43991b8d-f7aa-479c-9d38-e19114106e81/adoption/0.log" Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.077099 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_89760273-d9f8-4c51-8af9-4a651cadc92c/openstack-network-exporter/0.log" Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.173702 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_89760273-d9f8-4c51-8af9-4a651cadc92c/ovn-northd/0.log" Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.311129 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6afaef21-c973-4ec1-ae90-f3c9b603f713/openstack-network-exporter/0.log" Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.374958 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6afaef21-c973-4ec1-ae90-f3c9b603f713/ovsdbserver-nb/0.log" Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.470209 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_7fbe256e-5861-4bd2-b76d-a53f79b48380/openstack-network-exporter/0.log" Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.611404 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_7fbe256e-5861-4bd2-b76d-a53f79b48380/ovsdbserver-nb/0.log" Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.683630 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_977a1b80-05e8-4d3c-acbb-e9ea09b98ab0/openstack-network-exporter/0.log" Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.743901 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_977a1b80-05e8-4d3c-acbb-e9ea09b98ab0/ovsdbserver-nb/0.log" Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.896873 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e971ad9f-b09c-4504-8caf-f6c9f0801e00/openstack-network-exporter/0.log" Jan 30 23:31:55 crc kubenswrapper[4979]: I0130 23:31:55.975055 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_e971ad9f-b09c-4504-8caf-f6c9f0801e00/ovsdbserver-sb/0.log" Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.051317 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_755c668a-a4c9-4a52-901d-338208af4efb/openstack-network-exporter/0.log" Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.135493 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_755c668a-a4c9-4a52-901d-338208af4efb/ovsdbserver-sb/0.log" Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.258059 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_b0076344-a5b2-4fef-8a6f-28b6194b850e/openstack-network-exporter/0.log" Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.302255 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_b0076344-a5b2-4fef-8a6f-28b6194b850e/ovsdbserver-sb/0.log" Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.468047 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8565876748-g76rq_019fe9ef-3972-45a8-82ec-8b566d9a1c58/placement-api/0.log" Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.556248 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8565876748-g76rq_019fe9ef-3972-45a8-82ec-8b566d9a1c58/placement-log/0.log" Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.671731 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0f8756ad-bff0-4f0d-9444-cbba47490d33/init-config-reloader/0.log" Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.749536 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_4a63b89d-496c-4f6e-8ba3-a18de60230af/memcached/0.log" Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.830360 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0f8756ad-bff0-4f0d-9444-cbba47490d33/config-reloader/0.log" Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.846131 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0f8756ad-bff0-4f0d-9444-cbba47490d33/prometheus/0.log" Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.879054 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0f8756ad-bff0-4f0d-9444-cbba47490d33/init-config-reloader/0.log" Jan 30 23:31:56 crc kubenswrapper[4979]: I0130 23:31:56.885717 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0f8756ad-bff0-4f0d-9444-cbba47490d33/thanos-sidecar/0.log" Jan 30 23:31:57 crc kubenswrapper[4979]: I0130 23:31:57.054498 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_291b372c-0448-4bc4-88a4-e61a412ba45a/setup-container/0.log" Jan 30 23:31:57 crc kubenswrapper[4979]: I0130 23:31:57.219423 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_291b372c-0448-4bc4-88a4-e61a412ba45a/setup-container/0.log" Jan 30 23:31:57 crc kubenswrapper[4979]: I0130 23:31:57.241245 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_291b372c-0448-4bc4-88a4-e61a412ba45a/rabbitmq/0.log" Jan 30 23:31:57 crc kubenswrapper[4979]: I0130 23:31:57.245672 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_c14c3367-d6a7-443a-9c15-913f73eac121/setup-container/0.log" Jan 30 23:31:57 crc kubenswrapper[4979]: I0130 23:31:57.421881 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c14c3367-d6a7-443a-9c15-913f73eac121/rabbitmq/0.log" Jan 30 23:31:57 crc kubenswrapper[4979]: I0130 23:31:57.442869 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c14c3367-d6a7-443a-9c15-913f73eac121/setup-container/0.log" Jan 30 23:32:03 crc kubenswrapper[4979]: I0130 23:32:03.046424 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-lhhst"] Jan 30 23:32:03 crc kubenswrapper[4979]: I0130 23:32:03.057171 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-lhhst"] Jan 30 23:32:03 crc kubenswrapper[4979]: I0130 23:32:03.083659 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6a3c61-50ef-48b5-bcc0-ab3374693979" path="/var/lib/kubelet/pods/4e6a3c61-50ef-48b5-bcc0-ab3374693979/volumes" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.281474 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-fc589b45f-r2mb8_dcd08638-857d-40cd-a92c-b6dcef0bc329/manager/0.log" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.440420 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-5k7wd_9134e6d2-b638-49be-9612-be12250e0a6d/manager/0.log" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.490602 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-787499fbb-p95sz_11771b88-abd2-436e-a95c-5113a5bae88b/manager/0.log" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.655944 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/util/0.log" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.941868 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/pull/0.log" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.949792 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/util/0.log" Jan 30 23:32:17 crc kubenswrapper[4979]: I0130 23:32:17.976002 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/pull/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.162914 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/pull/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.199603 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/util/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.224230 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_fc1caef58c83bc404d0e2e9408dc7258e81b6d0604b7412ddb2590f61d7f4cf_b788bb72-addf-4df0-9fa8-e27fb8e1e10a/extract/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.408808 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-65dc6c8d9c-h59f2_0c1c6a5c-c91b-4f9b-bf07-c2fd0472f1fc/manager/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.536228 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6bfc9d4d48-zqjfh_8893a935-e9c7-4d38-ae0c-17a94445475f/manager/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.623267 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-5pmpx_07393de3-4dbb-4de1-a7fc-49785a623de2/manager/0.log" Jan 30 23:32:18 crc kubenswrapper[4979]: I0130 23:32:18.870157 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6fd9bbb6f6-lrqnv_9c8cf87b-4069-497d-9fcc-3b7be476ed4d/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.132738 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7d96d95959-5s8xm_7f396cc2-4739-4401-9319-36881d4f449d/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.157412 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-9q469_5966d922-4db9-40f7-baf1-5624f1a033d6/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.185482 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-64469b487f-g6pnt_39f45c61-20b7-4d98-98af-526018a240c1/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.461419 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-6bb56_777d41f5-6e7f-4099-9f6f-aceaf0b972da/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.562999 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-576995988b-v774d_31481495-f181-449a-887e-ed58bf88c783/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.824572 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5644b66645-lz8dw_1fe4c32c-a00c-41e9-a15d-d1ff4cedf9f7/manager/0.log" Jan 30 23:32:19 crc kubenswrapper[4979]: I0130 23:32:19.904459 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-694c6dcf95-58s6k_73527aaf-5de3-4a3e-aa4c-f2ac98e5be11/manager/0.log" Jan 30 23:32:20 crc kubenswrapper[4979]: I0130 23:32:20.027498 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d5tdxg_c9710f6a-7b47-4f62-bc11-9d5727fdb01f/manager/0.log" Jan 30 23:32:20 crc kubenswrapper[4979]: I0130 23:32:20.207617 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7c7d885c49-dmwtw_9a874b50-c515-45d3-8562-05532a2c5adc/operator/0.log" Jan 30 23:32:20 crc kubenswrapper[4979]: I0130 23:32:20.451412 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-jl5wf_bb59579b-3a3c-4ae9-b3fe-d4231a17e050/registry-server/0.log" Jan 30 23:32:20 crc kubenswrapper[4979]: I0130 23:32:20.741835 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-7f98k_cf2e278a-e0cb-4505-bd08-38c02155a632/manager/0.log" Jan 30 23:32:20 crc kubenswrapper[4979]: I0130 23:32:20.817889 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-6f7vv_82a19f5f-9a94-4b08-8795-22fce21897bf/manager/0.log" Jan 30 23:32:21 crc kubenswrapper[4979]: I0130 23:32:21.327363 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-566d8d7445-78f4b_c15b97e5-3fe4-4f42-9501-b4c7c083bdbb/manager/0.log" Jan 30 23:32:21 crc kubenswrapper[4979]: I0130 23:32:21.361915 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-r4rcx_788f4d92-590f-44b1-8b93-a15b9f88b052/operator/0.log" Jan 30 23:32:21 crc kubenswrapper[4979]: I0130 23:32:21.686475 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-57br8_baa9dff2-93f9-4590-a86d-cd891b4273f2/manager/0.log" Jan 30 23:32:21 crc kubenswrapper[4979]: I0130 23:32:21.746629 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-69484b8d9d-nc5fg_bf959f71-8af9-4121-888f-13207cc2e1d0/manager/0.log" Jan 30 23:32:21 crc kubenswrapper[4979]: I0130 23:32:21.787212 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5b5794dddd-fgq92_cea237e7-6ca9-4dcd-b5d6-d471898e2c09/manager/0.log" Jan 30 23:32:21 crc kubenswrapper[4979]: I0130 23:32:21.837274 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-586b95b788-dpkrg_2487dbd3-ca49-4b26-99e3-2c858b549944/manager/0.log" Jan 30 23:32:31 crc kubenswrapper[4979]: I0130 23:32:31.257670 4979 scope.go:117] "RemoveContainer" containerID="c67c788d4520c8623a63e6f6ba906d43acdb20876d211c331df4d5a9e42eee7e" Jan 30 23:32:31 crc kubenswrapper[4979]: I0130 23:32:31.310092 4979 scope.go:117] "RemoveContainer" containerID="58163cfecf1e6d2fed441241595c3b510d0c4b0a9adfab7fece442a3238e97f0" Jan 30 23:32:31 crc kubenswrapper[4979]: I0130 23:32:31.338486 4979 scope.go:117] "RemoveContainer" containerID="4bf379d2ade37e9d1e0a22eab217d802e3a8854982275953c74bf158307b26eb" Jan 30 23:32:43 crc kubenswrapper[4979]: I0130 23:32:43.000208 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rthrv_6ebf43de-28a1-4cb6-a008-7bcc970b96ac/control-plane-machine-set-operator/0.log" Jan 30 23:32:43 crc kubenswrapper[4979]: I0130 23:32:43.213615 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mr5l2_7616472e-472c-4dfa-bf69-97d784e1e42f/kube-rbac-proxy/0.log" Jan 30 23:32:43 crc kubenswrapper[4979]: I0130 23:32:43.269749 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mr5l2_7616472e-472c-4dfa-bf69-97d784e1e42f/machine-api-operator/0.log" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.241837 4979 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qwqd2"] Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243003 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="extract-content" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243077 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="extract-content" Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243099 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="registry-server" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243109 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="registry-server" Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243145 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="extract-utilities" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243156 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="extract-utilities" Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243181 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="extract-content" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243191 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="extract-content" Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243203 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" containerName="container-00" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243213 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" containerName="container-00" Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243226 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="registry-server" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243235 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="registry-server" Jan 30 23:32:47 crc kubenswrapper[4979]: E0130 23:32:47.243258 4979 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="extract-utilities" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243267 4979 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="extract-utilities" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243602 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9c11ad-8590-45a5-bff9-9694d99cf407" containerName="registry-server" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243640 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf6ba99-f760-491b-95ed-71ae1b9e34b4" containerName="registry-server" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.243664 4979 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c7f57f-ff39-482a-bb7a-ca4882cf8fab" containerName="container-00" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.246232 4979 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.267693 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwqd2"] Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.392175 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8kc9\" (UniqueName: \"kubernetes.io/projected/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-kube-api-access-d8kc9\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.392328 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-catalog-content\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.392594 4979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-utilities\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.495283 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-catalog-content\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.495396 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-utilities\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.495468 4979 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kc9\" (UniqueName: \"kubernetes.io/projected/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-kube-api-access-d8kc9\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.495946 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-catalog-content\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.496303 4979 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-utilities\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.516314 4979 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kc9\" (UniqueName: \"kubernetes.io/projected/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-kube-api-access-d8kc9\") pod \"redhat-marketplace-qwqd2\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:47 crc kubenswrapper[4979]: I0130 23:32:47.572120 4979 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:48 crc kubenswrapper[4979]: I0130 23:32:48.103544 4979 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwqd2"] Jan 30 23:32:48 crc kubenswrapper[4979]: I0130 23:32:48.234160 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerStarted","Data":"2a447b25143664a67304d059de23e8f6fcf4bd430097e0e4ed2adaf1675e6d7c"} Jan 30 23:32:49 crc kubenswrapper[4979]: I0130 23:32:49.247842 4979 generic.go:334] "Generic (PLEG): container finished" podID="760f6d0d-ff72-4a55-957d-71d0d72a8fe3" containerID="a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3" exitCode=0 Jan 30 23:32:49 crc kubenswrapper[4979]: I0130 23:32:49.248311 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerDied","Data":"a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3"} Jan 30 23:32:50 crc kubenswrapper[4979]: I0130 23:32:50.259652 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerStarted","Data":"1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7"} Jan 30 23:32:51 crc kubenswrapper[4979]: I0130 23:32:51.271584 4979 generic.go:334] "Generic (PLEG): container finished" podID="760f6d0d-ff72-4a55-957d-71d0d72a8fe3" containerID="1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7" exitCode=0 Jan 30 23:32:51 crc kubenswrapper[4979]: I0130 23:32:51.271901 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerDied","Data":"1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7"} Jan 30 23:32:52 crc kubenswrapper[4979]: I0130 23:32:52.282368 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerStarted","Data":"19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee"} Jan 30 23:32:52 crc kubenswrapper[4979]: I0130 23:32:52.305767 4979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qwqd2" podStartSLOduration=2.920131849 podStartE2EDuration="5.30574957s" podCreationTimestamp="2026-01-30 23:32:47 +0000 UTC" firstStartedPulling="2026-01-30 23:32:49.25071219 +0000 UTC m=+6765.211959243" lastFinishedPulling="2026-01-30 23:32:51.636329931 +0000 UTC m=+6767.597576964" observedRunningTime="2026-01-30 23:32:52.298863014 +0000 UTC m=+6768.260110057" watchObservedRunningTime="2026-01-30 23:32:52.30574957 +0000 UTC m=+6768.266996603" Jan 30 23:32:57 crc kubenswrapper[4979]: I0130 23:32:57.051757 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-545d4d4674-f88tb_99fcd41b-c557-4bf0-abbb-b189f4aaaf41/cert-manager-controller/0.log" Jan 30 23:32:57 crc kubenswrapper[4979]: I0130 23:32:57.187985 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-x57ft_34da3314-5047-419b-8c7b-927cc2f00d8c/cert-manager-cainjector/0.log" Jan 30 23:32:57 crc kubenswrapper[4979]: I0130 23:32:57.263065 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-pw6nw_7670008a-1d21-4255-8148-e85ac90a90d4/cert-manager-webhook/0.log" Jan 30 23:32:57 crc kubenswrapper[4979]: I0130 23:32:57.573194 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:57 crc kubenswrapper[4979]: I0130 23:32:57.573246 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:57 crc kubenswrapper[4979]: I0130 23:32:57.631674 4979 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:58 crc kubenswrapper[4979]: I0130 23:32:58.379892 4979 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:32:58 crc kubenswrapper[4979]: I0130 23:32:58.422839 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwqd2"] Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.348420 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qwqd2" podUID="760f6d0d-ff72-4a55-957d-71d0d72a8fe3" containerName="registry-server" containerID="cri-o://19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee" gracePeriod=2 Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.912206 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.964279 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8kc9\" (UniqueName: \"kubernetes.io/projected/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-kube-api-access-d8kc9\") pod \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.964365 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-utilities\") pod \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.964435 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-catalog-content\") pod \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\" (UID: \"760f6d0d-ff72-4a55-957d-71d0d72a8fe3\") " Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.965149 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-utilities" (OuterVolumeSpecName: "utilities") pod "760f6d0d-ff72-4a55-957d-71d0d72a8fe3" (UID: "760f6d0d-ff72-4a55-957d-71d0d72a8fe3"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.972054 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-kube-api-access-d8kc9" (OuterVolumeSpecName: "kube-api-access-d8kc9") pod "760f6d0d-ff72-4a55-957d-71d0d72a8fe3" (UID: "760f6d0d-ff72-4a55-957d-71d0d72a8fe3"). InnerVolumeSpecName "kube-api-access-d8kc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:33:00 crc kubenswrapper[4979]: I0130 23:33:00.982951 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "760f6d0d-ff72-4a55-957d-71d0d72a8fe3" (UID: "760f6d0d-ff72-4a55-957d-71d0d72a8fe3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.066685 4979 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.067147 4979 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.067165 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8kc9\" (UniqueName: \"kubernetes.io/projected/760f6d0d-ff72-4a55-957d-71d0d72a8fe3-kube-api-access-d8kc9\") on node \"crc\" DevicePath \"\"" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.363915 4979 generic.go:334] "Generic (PLEG): container finished" podID="760f6d0d-ff72-4a55-957d-71d0d72a8fe3" containerID="19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee" exitCode=0 Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.363995 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerDied","Data":"19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee"} Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.364018 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwqd2" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.364092 4979 scope.go:117] "RemoveContainer" containerID="19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.364075 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwqd2" event={"ID":"760f6d0d-ff72-4a55-957d-71d0d72a8fe3","Type":"ContainerDied","Data":"2a447b25143664a67304d059de23e8f6fcf4bd430097e0e4ed2adaf1675e6d7c"} Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.394970 4979 scope.go:117] "RemoveContainer" containerID="1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.395790 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwqd2"] Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.408413 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwqd2"] Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.421428 4979 scope.go:117] "RemoveContainer" containerID="a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.481597 4979 scope.go:117] "RemoveContainer" containerID="19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee" Jan 30 23:33:01 crc kubenswrapper[4979]: E0130 23:33:01.482420 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee\": container with ID starting with 19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee not found: ID does not exist" containerID="19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.482473 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee"} err="failed to get container status \"19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee\": rpc error: code = NotFound desc = could not find container \"19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee\": container with ID starting with 19c6ba84ed7fc6bbb26a713538a5925bcd5203c4a245d10445de9851dd7425ee not found: ID does not exist" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.482499 4979 scope.go:117] "RemoveContainer" containerID="1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7" Jan 30 23:33:01 crc kubenswrapper[4979]: E0130 23:33:01.482922 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7\": container with ID starting with 1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7 not found: ID does not exist" containerID="1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.482945 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7"} err="failed to get container status \"1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7\": rpc error: code = NotFound desc = could not find 
container \"1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7\": container with ID starting with 1f695c701717f5a62d657fb956926ec2d0b0729fd630931ba5b1e4b2c6f153a7 not found: ID does not exist" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.482959 4979 scope.go:117] "RemoveContainer" containerID="a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3" Jan 30 23:33:01 crc kubenswrapper[4979]: E0130 23:33:01.483463 4979 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3\": container with ID starting with a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3 not found: ID does not exist" containerID="a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3" Jan 30 23:33:01 crc kubenswrapper[4979]: I0130 23:33:01.483499 4979 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3"} err="failed to get container status \"a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3\": rpc error: code = NotFound desc = could not find container \"a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3\": container with ID starting with a496752be65368940fed823a021936b8de4d0ccd15fc6659201b245caf48abd3 not found: ID does not exist" Jan 30 23:33:03 crc kubenswrapper[4979]: I0130 23:33:03.079926 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760f6d0d-ff72-4a55-957d-71d0d72a8fe3" path="/var/lib/kubelet/pods/760f6d0d-ff72-4a55-957d-71d0d72a8fe3/volumes" Jan 30 23:33:10 crc kubenswrapper[4979]: I0130 23:33:10.815699 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-84fjt_4e67f5da-565e-4850-ac22-136965b5e12d/nmstate-console-plugin/0.log" Jan 30 23:33:10 crc kubenswrapper[4979]: I0130 23:33:10.984628 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nqwmx_f03646b0-8776-45cc-9594-a0266af57be5/kube-rbac-proxy/0.log" Jan 30 23:33:11 crc kubenswrapper[4979]: I0130 23:33:11.047280 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2xs54_2bf07cc3-611c-44b3-9fd0-831f5b718f11/nmstate-handler/0.log" Jan 30 23:33:11 crc kubenswrapper[4979]: I0130 23:33:11.073088 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nqwmx_f03646b0-8776-45cc-9594-a0266af57be5/nmstate-metrics/0.log" Jan 30 23:33:11 crc kubenswrapper[4979]: I0130 23:33:11.265623 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-f7cxj_63bf7e31-b607-4b21-9753-eb05a7bfb987/nmstate-webhook/0.log" Jan 30 23:33:11 crc kubenswrapper[4979]: I0130 23:33:11.301803 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-tv5t2_949791a2-d4bd-4ec8-8e34-70a2d0af1af1/nmstate-operator/0.log" Jan 30 23:33:25 crc kubenswrapper[4979]: I0130 23:33:25.745574 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-t8db4_be7dff91-b79d-4a99-a43b-9cc4a9894cda/prometheus-operator/0.log" Jan 30 23:33:26 crc kubenswrapper[4979]: I0130 23:33:26.187892 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w_a0c76d26-1e50-4da5-8774-dde557bb1c50/prometheus-operator-admission-webhook/0.log" Jan 30 23:33:26 crc kubenswrapper[4979]: I0130 23:33:26.204183 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d_800342ba-21de-4a0e-849e-695bd71885b9/prometheus-operator-admission-webhook/0.log" Jan 30 23:33:26 crc kubenswrapper[4979]: I0130 23:33:26.390350 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-5c445_c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9/operator/0.log" Jan 30 23:33:26 crc kubenswrapper[4979]: I0130 23:33:26.427920 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-99mbt_b4d1f5a8-494c-4d68-ac75-0d7516cb7fca/perses-operator/0.log" Jan 30 23:33:39 crc kubenswrapper[4979]: I0130 23:33:39.737142 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6whjn_b9bf7d77-b99e-4190-8510-dd0778767e89/kube-rbac-proxy/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.010799 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-frr-files/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.181638 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6whjn_b9bf7d77-b99e-4190-8510-dd0778767e89/controller/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.223683 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-metrics/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.261131 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-frr-files/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.263109 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-reloader/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.349040 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-reloader/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.559122 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-metrics/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.563086 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-reloader/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.583655 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-frr-files/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.628610 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-metrics/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.757191 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-frr-files/0.log" Jan 30 23:33:40 crc 
kubenswrapper[4979]: I0130 23:33:40.765494 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-reloader/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.799114 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/cp-metrics/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.823505 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/controller/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.931138 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/frr-metrics/0.log" Jan 30 23:33:40 crc kubenswrapper[4979]: I0130 23:33:40.985065 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/kube-rbac-proxy/0.log" Jan 30 23:33:41 crc kubenswrapper[4979]: I0130 23:33:41.049167 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/kube-rbac-proxy-frr/0.log" Jan 30 23:33:41 crc kubenswrapper[4979]: I0130 23:33:41.159679 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/reloader/0.log" Jan 30 23:33:41 crc kubenswrapper[4979]: I0130 23:33:41.294471 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-5bgxv_f8932bcf-8e7b-4302-a623-ece7abe7d2e2/frr-k8s-webhook-server/0.log" Jan 30 23:33:41 crc kubenswrapper[4979]: I0130 23:33:41.449391 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-68b5d74f6-krw7s_30c6b9df-d3aa-4a9a-807e-93d8b11c9159/manager/0.log" Jan 30 23:33:41 crc kubenswrapper[4979]: I0130 23:33:41.578719 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-545587bcb5-lxtf2_04d21772-3311-4f78-a621-a66fa5d1cb7d/webhook-server/0.log" Jan 30 23:33:41 crc kubenswrapper[4979]: I0130 23:33:41.758809 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v2nkx_6a083acc-78e0-41df-84ad-70c965c7bb5a/kube-rbac-proxy/0.log" Jan 30 23:33:42 crc kubenswrapper[4979]: I0130 23:33:42.434691 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v2nkx_6a083acc-78e0-41df-84ad-70c965c7bb5a/speaker/0.log" Jan 30 23:33:43 crc kubenswrapper[4979]: I0130 23:33:43.473635 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cnk7l_edde5f2f-1d96-49c5-aee3-92f1b77ac088/frr/0.log" Jan 30 23:33:55 crc kubenswrapper[4979]: I0130 23:33:55.617442 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/util/0.log" Jan 30 23:33:55 crc kubenswrapper[4979]: I0130 23:33:55.825992 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/pull/0.log" Jan 30 23:33:55 crc kubenswrapper[4979]: I0130 23:33:55.826020 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/pull/0.log" Jan 30 23:33:55 crc kubenswrapper[4979]: I0130 23:33:55.835882 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.001720 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.015819 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/pull/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.047143 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcn5zfr_3a16a524-cbae-4652-8fbd-e0b2430ec7d5/extract/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.169352 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.317949 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/pull/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.346989 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/pull/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.347502 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.514676 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/pull/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.570503 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/extract/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.573391 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713hgbl9_24460103-3748-49b9-9231-5a6e63ede52c/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.708917 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.901887 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/pull/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.936014 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/util/0.log" Jan 30 23:33:56 crc kubenswrapper[4979]: I0130 23:33:56.942063 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/pull/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.126113 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/extract/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.156294 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/util/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.219166 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cwqjc_20b0495c-9015-4cd9-9381-096926c32623/pull/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.304168 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/util/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.470152 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/util/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.483550 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/pull/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.487492 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/pull/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.642214 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/extract/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.642669 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/util/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.695833 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08hnt26_a4719f7f-2493-47b2-bd3d-3d2edecf2e00/pull/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.827834 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/extract-utilities/0.log" Jan 30 
23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.970044 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/extract-utilities/0.log" Jan 30 23:33:57 crc kubenswrapper[4979]: I0130 23:33:57.977230 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/extract-content/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.001697 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/extract-content/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.172934 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/extract-content/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.257733 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/extract-utilities/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.383056 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/extract-utilities/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.588088 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/extract-utilities/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.631526 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/extract-content/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.679656 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/extract-content/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.748691 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9jwmc_83f3bbd3-c82f-47bd-92fe-4dbe53982abc/registry-server/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.835716 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/extract-content/0.log" Jan 30 23:33:58 crc kubenswrapper[4979]: I0130 23:33:58.850884 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/extract-utilities/0.log" Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.045786 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nzltj_ea935cc6-1adc-4763-bf1c-8c08fec3894f/marketplace-operator/0.log" Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.223613 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/extract-utilities/0.log" Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.418299 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/extract-content/0.log" Jan 30 
Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.490957 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/extract-utilities/0.log"
Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.497444 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/extract-content/0.log"
Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.617582 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/extract-utilities/0.log"
Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.721646 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/extract-content/0.log"
Jan 30 23:33:59 crc kubenswrapper[4979]: I0130 23:33:59.850411 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/extract-utilities/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.048767 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7jr2p_f2ed839b-8a68-4f8d-b12b-dac0b2fae9d9/registry-server/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.093383 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/extract-content/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.100970 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfnsx_eb5ba6de-4ef3-49a4-bd09-1ca00d210025/registry-server/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.119260 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/extract-utilities/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.141577 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/extract-content/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.308575 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/extract-utilities/0.log"
Jan 30 23:34:00 crc kubenswrapper[4979]: I0130 23:34:00.322374 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/extract-content/0.log"
Jan 30 23:34:01 crc kubenswrapper[4979]: I0130 23:34:01.169433 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k8s6x_ceb321b4-b3fd-4c25-a0cb-8eec1b79a17d/registry-server/0.log"
Jan 30 23:34:02 crc kubenswrapper[4979]: I0130 23:34:02.039479 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 23:34:02 crc kubenswrapper[4979]: I0130 23:34:02.039538 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
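The liveness failure above is an HTTP GET against 127.0.0.1:8798/health that is refused outright. A minimal Go sketch reproducing that kind of check (the endpoint comes from the log; the client timeout and printed output are illustrative assumptions):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// With nothing listening on port 8798 this fails with
	// "connect: connection refused", the same error text the kubelet
	// records as the probe output above.
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe result:", resp.Status)
}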
pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:34:12 crc kubenswrapper[4979]: I0130 23:34:12.376265 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-t8db4_be7dff91-b79d-4a99-a43b-9cc4a9894cda/prometheus-operator/0.log" Jan 30 23:34:12 crc kubenswrapper[4979]: I0130 23:34:12.399339 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56c79bbdb8-68p2w_a0c76d26-1e50-4da5-8774-dde557bb1c50/prometheus-operator-admission-webhook/0.log" Jan 30 23:34:12 crc kubenswrapper[4979]: I0130 23:34:12.414819 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56c79bbdb8-54d4d_800342ba-21de-4a0e-849e-695bd71885b9/prometheus-operator-admission-webhook/0.log" Jan 30 23:34:12 crc kubenswrapper[4979]: I0130 23:34:12.551595 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-99mbt_b4d1f5a8-494c-4d68-ac75-0d7516cb7fca/perses-operator/0.log" Jan 30 23:34:12 crc kubenswrapper[4979]: I0130 23:34:12.559069 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-5c445_c019a415-f4ef-48f7-a0ce-0ee2e2fc95f9/operator/0.log" Jan 30 23:34:32 crc kubenswrapper[4979]: I0130 23:34:32.039518 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:34:32 crc kubenswrapper[4979]: I0130 23:34:32.040236 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.039746 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.040673 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.040732 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.041804 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.041882 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6" gracePeriod=600 Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.617893 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6" exitCode=0 Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.618067 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6"} Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.618527 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerStarted","Data":"0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6"} Jan 30 23:35:02 crc kubenswrapper[4979]: I0130 23:35:02.618555 4979 scope.go:117] "RemoveContainer" containerID="c200d4efb65f4035f3363f9b7062c1111661233f5aaefe34f30c80e458bcf1d9" Jan 30 23:35:38 crc kubenswrapper[4979]: I0130 23:35:38.007464 4979 generic.go:334] "Generic (PLEG): container finished" podID="a9f91df2-3eb9-4624-a492-49e62aa440f5" containerID="bc463b2ba79389ed187340fac491edd8546c2cc8b0dee8689a7ef810a254f1bd" exitCode=0 Jan 30 23:35:38 crc kubenswrapper[4979]: I0130 23:35:38.007585 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hvtn9/must-gather-w5l49" event={"ID":"a9f91df2-3eb9-4624-a492-49e62aa440f5","Type":"ContainerDied","Data":"bc463b2ba79389ed187340fac491edd8546c2cc8b0dee8689a7ef810a254f1bd"} Jan 30 23:35:38 crc kubenswrapper[4979]: I0130 23:35:38.008955 4979 scope.go:117] "RemoveContainer" containerID="bc463b2ba79389ed187340fac491edd8546c2cc8b0dee8689a7ef810a254f1bd" Jan 30 23:35:38 crc kubenswrapper[4979]: I0130 23:35:38.830747 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hvtn9_must-gather-w5l49_a9f91df2-3eb9-4624-a492-49e62aa440f5/gather/0.log" Jan 30 23:35:46 crc kubenswrapper[4979]: I0130 23:35:46.701069 4979 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hvtn9/must-gather-w5l49"] Jan 30 23:35:46 crc kubenswrapper[4979]: I0130 23:35:46.703386 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hvtn9/must-gather-w5l49" podUID="a9f91df2-3eb9-4624-a492-49e62aa440f5" containerName="copy" containerID="cri-o://cc0954d4b7f7b4f173183a7e8e00887cd4fb5316e7c3adf635a220200ba9af70" gracePeriod=2 Jan 30 23:35:46 crc kubenswrapper[4979]: I0130 23:35:46.716815 4979 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hvtn9/must-gather-w5l49"] Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.109070 4979 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-hvtn9_must-gather-w5l49_a9f91df2-3eb9-4624-a492-49e62aa440f5/copy/0.log" Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.109841 4979 generic.go:334] "Generic (PLEG): container finished" podID="a9f91df2-3eb9-4624-a492-49e62aa440f5" containerID="cc0954d4b7f7b4f173183a7e8e00887cd4fb5316e7c3adf635a220200ba9af70" exitCode=143 Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.246465 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hvtn9_must-gather-w5l49_a9f91df2-3eb9-4624-a492-49e62aa440f5/copy/0.log" Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.247115 4979 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.381485 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9f91df2-3eb9-4624-a492-49e62aa440f5-must-gather-output\") pod \"a9f91df2-3eb9-4624-a492-49e62aa440f5\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.381587 4979 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tmmw\" (UniqueName: \"kubernetes.io/projected/a9f91df2-3eb9-4624-a492-49e62aa440f5-kube-api-access-9tmmw\") pod \"a9f91df2-3eb9-4624-a492-49e62aa440f5\" (UID: \"a9f91df2-3eb9-4624-a492-49e62aa440f5\") " Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.391558 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f91df2-3eb9-4624-a492-49e62aa440f5-kube-api-access-9tmmw" (OuterVolumeSpecName: "kube-api-access-9tmmw") pod "a9f91df2-3eb9-4624-a492-49e62aa440f5" (UID: "a9f91df2-3eb9-4624-a492-49e62aa440f5"). InnerVolumeSpecName "kube-api-access-9tmmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.483493 4979 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tmmw\" (UniqueName: \"kubernetes.io/projected/a9f91df2-3eb9-4624-a492-49e62aa440f5-kube-api-access-9tmmw\") on node \"crc\" DevicePath \"\"" Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.553176 4979 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f91df2-3eb9-4624-a492-49e62aa440f5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a9f91df2-3eb9-4624-a492-49e62aa440f5" (UID: "a9f91df2-3eb9-4624-a492-49e62aa440f5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:35:47 crc kubenswrapper[4979]: I0130 23:35:47.585908 4979 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9f91df2-3eb9-4624-a492-49e62aa440f5-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 23:35:48 crc kubenswrapper[4979]: I0130 23:35:48.123220 4979 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hvtn9_must-gather-w5l49_a9f91df2-3eb9-4624-a492-49e62aa440f5/copy/0.log" Jan 30 23:35:48 crc kubenswrapper[4979]: I0130 23:35:48.124124 4979 scope.go:117] "RemoveContainer" containerID="cc0954d4b7f7b4f173183a7e8e00887cd4fb5316e7c3adf635a220200ba9af70" Jan 30 23:35:48 crc kubenswrapper[4979]: I0130 23:35:48.124329 4979 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hvtn9/must-gather-w5l49" Jan 30 23:35:48 crc kubenswrapper[4979]: I0130 23:35:48.159306 4979 scope.go:117] "RemoveContainer" containerID="bc463b2ba79389ed187340fac491edd8546c2cc8b0dee8689a7ef810a254f1bd" Jan 30 23:35:49 crc kubenswrapper[4979]: I0130 23:35:49.080414 4979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f91df2-3eb9-4624-a492-49e62aa440f5" path="/var/lib/kubelet/pods/a9f91df2-3eb9-4624-a492-49e62aa440f5/volumes" Jan 30 23:36:31 crc kubenswrapper[4979]: I0130 23:36:31.524147 4979 scope.go:117] "RemoveContainer" containerID="f040c130bed11dfc093605a6d4570cd022a74910715c781ada26034f68a76925" Jan 30 23:37:02 crc kubenswrapper[4979]: I0130 23:37:02.039999 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:37:02 crc kubenswrapper[4979]: I0130 23:37:02.040565 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:37:32 crc kubenswrapper[4979]: I0130 23:37:32.042094 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:37:32 crc kubenswrapper[4979]: I0130 23:37:32.042570 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.040995 4979 patch_prober.go:28] interesting pod/machine-config-daemon-kqsqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.041686 4979 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.041796 4979 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.043351 4979 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6"} pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" containerMessage="Container machine-config-daemon failed liveness probe, will 
be restarted" Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.043487 4979 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerName="machine-config-daemon" containerID="cri-o://0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6" gracePeriod=600 Jan 30 23:38:02 crc kubenswrapper[4979]: E0130 23:38:02.172008 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.546685 4979 generic.go:334] "Generic (PLEG): container finished" podID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6" exitCode=0 Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.546805 4979 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" event={"ID":"28767351-ec5c-4f9e-8b01-2954eaf4ea30","Type":"ContainerDied","Data":"0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6"} Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.547170 4979 scope.go:117] "RemoveContainer" containerID="294616006c8bfc73947d03a7513be0f73cc0210224fe87a927482fc9adf22eb6" Jan 30 23:38:02 crc kubenswrapper[4979]: I0130 23:38:02.548273 4979 scope.go:117] "RemoveContainer" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6" Jan 30 23:38:02 crc kubenswrapper[4979]: E0130 23:38:02.548870 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:38:16 crc kubenswrapper[4979]: I0130 23:38:16.070457 4979 scope.go:117] "RemoveContainer" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6" Jan 30 23:38:16 crc kubenswrapper[4979]: E0130 23:38:16.071633 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30" Jan 30 23:38:28 crc kubenswrapper[4979]: I0130 23:38:28.070101 4979 scope.go:117] "RemoveContainer" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6" Jan 30 23:38:28 crc kubenswrapper[4979]: E0130 23:38:28.070899 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 23:38:41 crc kubenswrapper[4979]: I0130 23:38:41.070357 4979 scope.go:117] "RemoveContainer" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6"
Jan 30 23:38:41 crc kubenswrapper[4979]: E0130 23:38:41.071442 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:38:54 crc kubenswrapper[4979]: I0130 23:38:54.070016 4979 scope.go:117] "RemoveContainer" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6"
Jan 30 23:38:54 crc kubenswrapper[4979]: E0130 23:38:54.071442 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
Jan 30 23:39:09 crc kubenswrapper[4979]: I0130 23:39:09.069767 4979 scope.go:117] "RemoveContainer" containerID="0aa1ed4263f330c89a668cc3e378b420e77e35875d284da541d354ecd9b584f6"
Jan 30 23:39:09 crc kubenswrapper[4979]: E0130 23:39:09.070533 4979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kqsqg_openshift-machine-config-operator(28767351-ec5c-4f9e-8b01-2954eaf4ea30)\"" pod="openshift-machine-config-operator/machine-config-daemon-kqsqg" podUID="28767351-ec5c-4f9e-8b01-2954eaf4ea30"
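The repeated "back-off 5m0s" errors above are CrashLoopBackOff at its cap: Kubernetes documents the restart delay as starting around 10s and doubling per crash, up to five minutes. A minimal Go sketch of that schedule (the constants reflect documented defaults, not this kubelet's configuration, and the real implementation in pod_workers is more involved):

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // documented initial back-off delay (assumed default)
	const maxDelay = 5 * time.Minute // the "back-off 5m0s" cap seen in the log
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: next start attempt in %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // once capped, every retry reports 5m0s
		}
	}
}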